Abstract:
Paleobathymetric assessments of fossil foraminiferal faunas play a significant role in the analysis of the paleogeographic, sedimentary, and tectonic histories of New Zealand's Neogene marine sedimentary basins. At depths >100 m, these assessments often have large uncertainties. This study, aimed at improving the precision of paleodepth assessments, documents the present-day distribution of deep-sea foraminifera (>63 µm) in 66 samples of seafloor sediment at 90-4700 m water depth (outer shelf to mid-abyssal), east of New Zealand. One hundred and thirty-nine of the 465 recorded species of benthic foraminifera are new records for the New Zealand region. The characteristics of the foraminiferal faunas that appear to provide the most useful information for estimating paleobathymetry are, in decreasing order of reliability: relative abundance of common benthic species; benthic species associations; upper depth limits of key benthic species; and relative abundance of planktic foraminifera. R-mode cluster analysis of the quantitative census data for the 58 most abundant species of benthic foraminifera produced six species associations within three higher-level clusters: (1) calcareous species most abundant at mid-bathyal to outer shelf depths (<1000 m); (2) calcareous species most abundant at mid-bathyal and greater depths (>600 m); (3) agglutinated species mostly occurring at deep abyssal depths (>3000 m). A detrended correspondence analysis ordination plot exhibits a strong relationship between these species associations and bathymetry. This is manifest in the bathymetric ranges of the relative abundance peaks of many of the common benthic species (e.g., Abditodentrix pseudothalmanni 500-2800 m, Bolivina robusta 200-650 m, Bulimina marginata f. marginata 20-600 m, B. marginata f. aculeata 400-3000 m, Cassidulina norvangi 1000-4500 m, Epistominella exigua 1000-4700 m, and Trifarina angulosa 10-650 m), which should prove useful in paleobathymetric estimates.
The upper depth limits of 28 benthic foraminiferal species (e.g., Fursenkoina complanata 200 m, Bulimina truncana 450 m, Melonis affinis 550 m, Eggerella bradyi 750 m, and Cassidulina norvangi 1000 m) have potential to improve the precision of paleobathymetric estimates based initially on the total faunal composition. The planktic percentage of foraminiferal tests increases from outer shelf to upper abyssal depths followed by a rapid decline within the foraminiferal lysocline (below c. 3600 m). A planktic percentage <50% is suggestive of shelf depths, and >50% is suggestive of bathyal or abyssal depths above the CCD. In the abyssal zone there is dramatic taphonomic loss of most agglutinated tests (except some textulariids) at burial depths of 0.1-0.2 m, which negates the potential usefulness of these taxa in paleobathymetric assessments.
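The planktic-percentage guideline above maps directly onto a small sketch (illustrative only, not code from the study); the 50% threshold and the shelf versus bathyal/abyssal interpretation follow the abstract:

```python
# Sketch of the planktic-percentage depth indicator described above.
# Thresholds follow the abstract: <50% suggests shelf depths, >50%
# suggests bathyal or abyssal depths above the CCD.

def planktic_percentage(n_planktic, n_benthic):
    """Percentage of planktic tests among all foraminiferal tests counted."""
    total = n_planktic + n_benthic
    if total == 0:
        raise ValueError("no tests counted")
    return 100.0 * n_planktic / total

def depth_class(pct):
    """Coarse bathymetric interpretation of a planktic percentage."""
    return "shelf" if pct < 50.0 else "bathyal/abyssal (above CCD)"
```

A full paleodepth assessment would of course combine this with the species associations and upper depth limits listed above.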
Abstract:
Strontium isotopes are useful tracers of fluid-rock interaction in marine hydrothermal systems and provide a potential way to quantify the amount of seawater that passes through these systems. We have determined the whole-rock Sr-isotopic compositions of a section of upper oceanic crust that formed at the fast-spreading East Pacific Rise, now exposed at Hess Deep. This dataset provides the first detailed point of comparison for the much-studied Ocean Drilling Program (ODP) drill core from Site 504B. Whole-rock and mineral Sr concentrations indicate that Sr exchange between hydrothermal fluids and the oceanic crust is complex, being dependent on the mineralogical reactions occurring; in particular, epidote formation takes up Sr from the fluid, increasing the 87Sr/86Sr of the bulk rock. Calculating the fluid flux required to shift the Sr-isotopic composition of the Hess Deep sheeted-dike complex, using the approach of Bickle and Teagle (1992, doi:10.1016/0012-821X(92)90221-G), gives a fluid flux similar to that determined for ODP Hole 504B. This suggests that the level of isotopic exchange observed in these two regions is probably typical for modern oceanic crust. Unfortunately, uncertainties in the modeling approach do not allow us to determine a fluid flux that is directly comparable to fluxes calculated by other methods.
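To illustrate the kind of isotope mass balance behind such flux estimates, here is a minimal closed-system water/rock calculation. This is a deliberately simplified toy balance, not the open-system advection-exchange formulation of Bickle and Teagle (1992); all numbers in the usage example are round illustrative values, not data from this study:

```python
# Hedged sketch: closed-system Sr water/rock mass balance. Assumes
# complete isotopic equilibration between rock and a single batch of
# fluid, so the rock's final 87Sr/86Sr lies between the two end-members.

def water_rock_ratio(r_rock_init, r_rock_final, r_fluid,
                     sr_rock_ppm, sr_fluid_ppm):
    """Water/rock mass ratio needed to shift a rock's 87Sr/86Sr from
    r_rock_init to r_rock_final by exchange with fluid of ratio r_fluid."""
    if not (min(r_rock_init, r_fluid) < r_rock_final < max(r_rock_init, r_fluid)):
        raise ValueError("final composition must lie between the end-members")
    # Isotope ratios mix weighted by each reservoir's Sr mass.
    return (sr_rock_ppm * (r_rock_final - r_rock_init)
            / (sr_fluid_ppm * (r_fluid - r_rock_final)))

# Example with illustrative values: fresh basalt 0.7025, altered rock
# 0.7035, seawater 0.7091, 110 ppm Sr in rock, 8 ppm Sr in seawater.
wr = water_rock_ratio(0.7025, 0.7035, 0.7091, 110.0, 8.0)
```

Converting such a ratio into a flux (kg of fluid per m² of crust per unit time) additionally requires crustal thickness, density, and a timescale, which is where the model-dependent uncertainties mentioned above enter.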
Abstract:
Three sites were drilled in the Izu-Bonin forearc basin during Ocean Drilling Program (ODP) Leg 126. High-quality formation microscanner (FMS) data from two of the sites provide images of part of a thick, volcaniclastic, middle to upper Oligocene, basin-plain turbidite succession. The FMS images were used to construct bed-by-bed sedimentary sections for the depth intervals 2232-2441 m below rig floor (mbrf) in Hole 792E, and 4023-4330 mbrf in Hole 793B. Beds vary in thickness from those that are near or below the resolution of the FMS tool (2.5 cm) to those that are 10-15 m thick. The bed thicknesses are distributed according to a power law with an exponent of about 1.0. There are no obvious upward thickening or thinning sequences in the bed-by-bed sections. Spaced packets of thick and very thick beds may be a response to (1) low stands of global sea level, particularly at 30 Ma, (2) periods of increased tectonic uplift, or (3) periods of more intense volcanism. Graded sandstones, most pebbly sandstones, and graded to graded-stratified conglomerates were deposited by turbidity currents. The very thick, mainly structureless beds of sandstone, pebbly sandstone, and pebble conglomerate are interpreted as sandy debris-flow deposits. Many of the sediment gravity flows may have been triggered by earthquakes. Long recurrence intervals of 0.3-1 m.y. for the very thickest beds are consistent with triggering by large-magnitude earthquakes (M = 9) with epicenters approximately 10-50 km away from large, unstable accumulations of volcaniclastic sand and ash on the flanks of arc volcanoes. Paleocurrents were obtained from the grain fabric of six thicker sandstone beds, and ripple migration directions in about 40 thinner beds; orientations were constrained by the FMS images. The data from ripples are very scattered and cannot be used to specify source positions. 
They do, however, indicate that the paleoenvironment was a basin plain where weaker currents were free to follow a broad range of flow paths. The data from sandstone fabric are more reliable and indicate that turbidity currents flowed toward 150° during the time period from 28.9 to 27.3 Ma. This direction is essentially along the axis of the forearc basin, from north to south, with a small component of flow away from the western margin of the basin.
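The power-law bed-thickness statistics described above (exceedance number N(>T) ~ T^(-B) with B ≈ 1.0) can be checked with a standard maximum-likelihood estimator. The sketch below uses the generic Hill estimator over beds thicker than a cutoff; it is not the authors' procedure, just a common way to fit such an exponent:

```python
# Sketch: ML (Hill) estimate of the exponent B in P(T > t) ~ t^(-B)
# for bed thicknesses t >= t_min. On a log-log exceedance plot this
# distribution is a straight line of slope -B.

import math

def power_law_exponent(thicknesses, t_min):
    """Hill estimator: B = n / sum(ln(t_i / t_min)) over the tail t_i >= t_min."""
    tail = [t for t in thicknesses if t >= t_min]
    if len(tail) < 2:
        raise ValueError("not enough beds above the cutoff")
    return len(tail) / sum(math.log(t / t_min) for t in tail)

# Synthetic check: beds drawn deterministically from a B = 1 Pareto
# distribution via the inverse CDF should recover an exponent near 1.
beds = [(1.0 - (i + 0.5) / 1000.0) ** -1.0 for i in range(1000)]
b_hat = power_law_exponent(beds, 1.0)
```

In practice the cutoff t_min matters because the thinnest beds fall below the FMS tool's 2.5 cm resolution and are undercounted.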
Abstract:
We studied two deep-sea cores from the Scotia Sea to reconstruct past atmospheric circulation in the southern hemisphere and to resolve a long-standing debate on the interpretation of magnetic susceptibility (MS) records in Southern Ocean (SO) sediment. High-sedimentation sites MD07-3134 (0.2 - 1.2 m/kyr) and MD07-3133 (0.3 - 2 m/kyr) cover the last 92.5 kyr and 36 kyr, respectively. Both exhibit a one-to-one coupling of the MS and Ca2+ signals to the non-sea salt (nss) Ca2+ signal of the EDML ice core, clearly identifying atmospheric circulation as the means of distribution. Comparison with additional proxies also excludes any major influence of volcanic sources, sea ice, icebergs, or oceanic current transport. The close resemblance of the dust proxies over the last glacial cycle, in turn, allows for the establishment of an age model of unprecedented resolution and precision for SO deep-sea sediment, because atmospheric transport involves no major leads or lags. This is of particular importance because MS is routinely measured on deep-sea cores in the SO, but the sediments usually lack biogenic carbonate and therefore have had only limited stratigraphic control so far. Southern South America (SSA) is the likely source of eolian material because Site MD07-3133, located closer to the continent, has slightly higher MS values than Site MD07-3134, and the MS record of Patagonian Site SALSA also shows comparable variability. Patagonia was the dust source for both the Scotia Sea and East Antarctica. Dust fluxes were several times higher during glacial times, when atmospheric circulation was either stronger or shifted in latitude, sea level was lowered, shelf surfaces were exposed, and environmental conditions in SSA were dominated by glaciers and extended outwash plains. Hence, MS records of SO deep-sea sediment are reliable tracers of atmospheric circulation, allowing for chronologically constrained reconstructions of circum-Antarctic paleoclimate history.
Abstract:
The abundances and distribution of metazoan within-ice meiofauna (13 stations) and under-ice fauna (12 stations) were investigated in level sea ice and sea-ice ridges in the Chukchi/Beaufort Seas and Canada Basin in June/July 2005, using a combination of ice coring and SCUBA diving. Ice meiofauna abundance was estimated from live counts in the bottom 30 cm of level sea ice, based on triplicate ice core sampling at each location, and in individual ice chunks from ridges at four locations. Under-ice amphipods were counted in situ in replicate (N=24-65 per station) 0.25 m**2 quadrats using SCUBA, to a maximum water depth of 12 m. In level sea ice, the most abundant ice meiofauna groups were Turbellaria (46%), Nematoda (35%), and Harpacticoida (19%), with overall low abundances per station ranging from 0.0 to 10.9 ind/l (median 0.8 ind/l). In level ice, low ice algal pigment concentrations (<0.1-15.8 µg Chl a/l), low brine salinities (1.8-21.7), and flushing from the melting sea ice likely explain the low ice meiofauna concentrations. Higher abundances of Turbellaria, Nematoda and Harpacticoida were also observed in pressure ridges (0-200 ind/l, median 40 ind/l), although values were highly variable and only the medians of Turbellaria were significantly higher in ridge ice than in level ice. Median abundances of under-ice amphipods across all ice types (level ice, various ice ridge structures) ranged from 8 to 114 ind/m**2 per station and mainly consisted of Apherusa glacialis (87%), Onisimus spp. (7%) and Gammarus wilkitzkii (6%). The highest amphipod abundances were observed in pressure ridges at depths >3 m, where abundances were up to 42-fold higher than in level ice. We propose that the summer ice melt impacted meiofauna and under-ice amphipod abundance and distribution through (a) flushing, and (b) enhanced salinity stress in thinner level sea ice (less than 3 m thick).
We further suggest that pressure ridges, which extend into deeper, high-salinity water, become accumulation regions for ice meiofauna and under-ice amphipods in summer. Pressure ridges thus might be crucial for faunal survival during periods of enhanced summer ice melt. Previous estimates of Arctic sea ice meiofauna and under-ice amphipods on regional and pan-Arctic scales likely underestimate abundances at least in summer because they typically do not include pressure ridges.
Abstract:
The Zambezi deep-sea fan, the largest of its kind along the east African continental margin, is poorly studied to date, despite its potential to record marine and terrestrial climate signals in the southwest Indian Ocean. Therefore, gravity core GeoB 9309-1, retrieved from 1219 m water depth, was investigated for various geophysical (magnetic susceptibility, porosity, colour reflectance) and geochemical (pore water and sediment geochemistry, Fe and P speciation) properties. Onboard and onshore data documented a sulphate/methane transition (SMT) zone at ~450-530 cm sediment depth, where the simultaneous consumption of pore water sulphate and methane liberates hydrogen sulphide and bicarbonate into the pore space. This leads to characteristic changes in the sediment and pore water chemistry, such as the reduction of primary Fe (oxyhydr)oxides, the precipitation of Fe sulphides, and the mobilization of Fe (oxyhydr)oxide-bound P. These chemical processes also lead to a marked decrease in magnetic susceptibility. Below the SMT, we find a reduction in porosity, possibly due to pore space cementation by authigenic minerals. Formation of the observed geochemical, magnetic and mineralogical patterns requires a fixation of the SMT at this distinct sediment depth for a considerable time, which we calculated to be ~10 000 years assuming steady-state conditions, following a period of rapid upward migration towards this interval. We postulate that the worldwide sea-level rise at the last glacial/interglacial transition (~10 000 years B.P.) most probably caused the fixation of the SMT at its present position, through drastically reduced sediment delivery to the deep-sea fan. In addition, we report an internal redistribution of P around the SMT, closely linked to the (de)coupling of sedimentary Fe and P, which leaves a characteristic pattern in the solid P record. By phosphate re-adsorption onto Fe (oxyhydr)oxides above, and formation of authigenic P minerals (e.g.
vivianite) below the SMT, deep-sea fan deposits may potentially act as long-term sinks for P.
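The steady-state timescale argument above can be illustrated with a back-of-the-envelope Fick's-law calculation: the time needed for a diffusive sulphate flux to supply an observed authigenic sulphide inventory. Every number in the example is a placeholder assumption for illustration only, not a value from this study:

```python
# Illustrative steady-state SMT fixation-time estimate. Assumed inputs
# (all hypothetical): porosity phi, sediment diffusion coefficient Ds
# in m^2/yr, a linear sulphate gradient dC/dz in mol m^-3 per m, and a
# sulphide inventory in mol m^-2 accumulated at the SMT.

def diffusive_flux(porosity, d_sed, dc_dz):
    """Fick's first law in sediment: J = -phi * Ds * dC/dz
    (mol m^-2 yr^-1 for C in mol m^-3 and Ds in m^2 yr^-1)."""
    return -porosity * d_sed * dc_dz

def fixation_time(inventory, flux):
    """Years needed for |flux| to accumulate the given inventory (mol m^-2)."""
    return inventory / abs(flux)

# Assumed example: phi = 0.7, Ds = 0.01 m^2/yr, sulphate dropping by
# 28 mol/m^3 over 5 m (dC/dz = -5.6), inventory 400 mol/m^2.
j = diffusive_flux(0.7, 0.01, -5.6)      # downward flux toward the SMT
t_fix = fixation_time(400.0, j)          # on the order of 10^4 years
```

With these placeholder values the calculation lands on the same order of magnitude (~10 000 years) as the steady-state estimate in the abstract, which is the point of the illustration.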
Abstract:
The middle Miocene δ18O increase represents a fundamental change in Earth's climate system due to a major expansion and permanent establishment of the East Antarctic Ice Sheet, accompanied by some deep-water cooling. The long-term middle to late Miocene cooling trend was punctuated by several periods of glaciation (Mi-Events) characterized by oxygen isotopic shifts that have been related to the waxing and waning of the Antarctic ice sheet and bottom-water cooling. Here, we present a high-resolution benthic stable oxygen isotope record from ODP Site 1085, located at the southwestern African continental margin, that provides a detailed chronology for the middle to late Miocene (13.9-7.3 Ma) climate transition in the eastern South Atlantic. A composite Fe intensity record obtained by XRF core scanning of ODP Sites 1085 and 1087 was used to construct an astronomically calibrated chronology based on orbital tuning. The oxygen isotope data exhibit four distinct δ18O excursions, with astronomical ages of 13.8, 13.2, 11.7, and 10.4 Ma, corresponding to the Mi3, Mi4, Mi5, and Mi6 events. A global climate record was extracted from the oxygen isotopic composition. Both long- and short-term variability in the climate record is discussed in terms of sea-level and deep-water temperature changes. The oxygen isotope data support a causal link between sequence boundaries traced from the shelf and glacioeustatic changes due to ice-sheet growth. Spectral analysis of the benthic δ18O record shows strong power in the 400-kyr and 100-kyr bands, documenting a paleoceanographic response to eccentricity-modulated variations in precession. A spectral peak around 180 kyr might be related to the asymmetry of the obliquity cycle, indicating that the response of the dominantly unipolar Antarctic ice sheet to obliquity-induced variations probably controlled the middle to late Miocene climate system.
Maxima in the δ18O record, interpreted as glacial periods, correspond to minima in the 100-kyr eccentricity cycle and minima in the 174-kyr obliquity modulation. Strong middle to late Miocene glacial events are associated with 400-kyr eccentricity minima and obliquity modulation minima. Thus, fluctuations in the amplitude of obliquity and eccentricity seem to be the driving force behind middle to late Miocene climate variability.
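The spectral analysis step described above can be sketched with a plain discrete Fourier transform: peaks at the 400-kyr and 100-kyr eccentricity bands should stand out in the periodogram of an evenly sampled δ18O-like series. The input below is synthetic; a real analysis would detrend, taper, and apply significance tests:

```python
# Minimal periodogram sketch for an evenly sampled time series.
# Returns (period_kyr, power) for each positive Fourier frequency.

import cmath, math

def power_spectrum(x, dt_kyr):
    """Naive DFT power spectrum of a real series sampled every dt_kyr."""
    n = len(x)
    mean = sum(x) / n
    out = []
    for k in range(1, n // 2):
        s = sum((x[j] - mean) * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        out.append((n * dt_kyr / k, abs(s) ** 2 / n))
    return out

# Synthetic delta18O-like series: 100-kyr and (weaker) 400-kyr cycles,
# sampled every 10 kyr over 6400 kyr.
series = [math.sin(2 * math.pi * t / 100.0) + 0.5 * math.sin(2 * math.pi * t / 400.0)
          for t in range(0, 6400, 10)]
spec = power_spectrum(series, 10.0)
top_periods = {round(p) for p, _ in sorted(spec, key=lambda pk: pk[1], reverse=True)[:2]}
```

The two strongest periods recovered are 100 and 400 kyr, mirroring the bands reported for the Site 1085 record.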
Abstract:
The DEEP site sediment sequence obtained during the ICDP SCOPSCO project at Lake Ohrid was dated using tephrostratigraphic information, cyclostratigraphy, and orbital tuning through marine isotope stages (MIS) 15-1. Although this approach is suitable for generating a general chronological framework for the long succession, it is insufficient to resolve more detailed palaeoclimatological questions, such as leads and lags of climate events between marine and terrestrial records or between different regions. Here, we demonstrate how the use of different tie points can affect cyclostratigraphy and orbital tuning for the period between ca. 140 and 70 ka, and how the results can be correlated with directly or indirectly radiometrically dated Mediterranean marine and continental proxy records. The alternative age model presented here shows consistent differences from the one initially proposed by Francke et al. (2015) for the same interval, in particular at the MIS 6-5e transition. According to this new age model, different proxies from the DEEP site sediment record support an increase of temperatures from glacial to interglacial conditions that is almost synchronous with a rapid increase in sea surface temperature observed in the western Mediterranean. The results show how a detailed study of independent chronological tie points is important to align different records and to highlight asynchronisms of climate events. Moreover, Francke et al. (2016) have incorporated the new chronology proposed for tephra OH-DP-0499 into the final DEEP age model. This has substantially reduced the chronological discrepancies between the DEEP site age model and the model proposed here for the last glacial-interglacial transition.
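The mechanics of a tie-point age model, the object being revised above, reduce to piecewise-linear interpolation of age against depth between independently dated horizons. The sketch below is generic; the tie points in the example are invented, not the OH-DP tephra ages:

```python
# Sketch of tie-point age modelling: ages between dated horizons
# (tephra layers, tuning targets) are interpolated linearly with depth.

def age_model(tie_points):
    """tie_points: iterable of (depth_m, age_ka) pairs.
    Returns a function depth -> age by piecewise-linear interpolation."""
    pts = sorted(tie_points)
    def age(depth):
        if not pts[0][0] <= depth <= pts[-1][0]:
            raise ValueError("depth outside the dated interval")
        for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
            if d0 <= depth <= d1:
                return a0 + (a1 - a0) * (depth - d0) / (d1 - d0)
    return age

# Invented example tie points: core top (0 ka), a tephra at 10 m
# dated to 70 ka, and a tuning target at 20 m dated to 140 ka.
age = age_model([(0.0, 0.0), (10.0, 70.0), (20.0, 140.0)])
```

Moving a single tie point shifts every interpolated age in the adjacent segments, which is exactly why the choice of tie points debated above matters for leads and lags.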
Abstract:
This paper presents a simple gravity evaluation model for large reflector antennas, together with an experimental example: a case study of an uplink array of four 35-m antennas at X and Ka bands. The model can be used to evaluate gain reduction as a function of the maximum gravity distortion, and also to specify that distortion at the system design level. The case study consists of an array of 35-m antennas for deep space missions. The main issues due to gravity effects have been explored with Monte Carlo simulation analysis.
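The gain reduction versus surface distortion relationship mentioned above is commonly approximated with the Ruze equation. The sketch applies that standard formula as an illustration; the paper's own evaluation model may differ, and the rms error value in the example is a made-up number:

```python
# Ruze (1966) gain loss for an antenna with surface rms error eps:
# G/G0 = exp(-(4*pi*eps/lambda)^2). Loss grows rapidly with frequency,
# which is why gravity distortion matters far more at Ka than X band.

import math

C = 299792458.0  # speed of light, m/s

def ruze_gain_loss_db(rms_error_m, freq_hz):
    """Gain loss in dB for a given surface rms error and frequency."""
    lam = C / freq_hz
    efficiency = math.exp(-(4.0 * math.pi * rms_error_m / lam) ** 2)
    return -10.0 * math.log10(efficiency)

# Illustrative 0.3 mm rms gravity distortion at X (8.4 GHz) and Ka (32 GHz).
loss_x = ruze_gain_loss_db(0.0003, 8.4e9)
loss_ka = ruze_gain_loss_db(0.0003, 32e9)
```

The same distortion that is negligible at X band costs roughly 0.7 dB at Ka band, illustrating why the distortion budget must be specified at the system design level.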
Abstract:
When fresh fruit reaches the final markets from the suppliers, its quality is not always as good as it should be, either because it has been mishandled during transportation or because it lacked adequate quality control at the producer level before being shipped. This is why the final markets need to establish their own quality assessment system if they want to guarantee their customers the quality they intend to sell. In this work, a system to control fruit quality at the last level of the distribution channel has been designed. The system combines rapid control techniques with laboratory equipment and statistical sampling protocols to obtain a dynamic, objective process that can advantageously replace the visual quality control inspections carried out by human experts at the reception platform of most hypermarkets. Portable measuring equipment (firmness tester, temperature and humidity sensors, etc.) has been chosen, as well as easy-to-use laboratory equipment (texturometer, colorimeter, refractometer, etc.), combining them to control the most important fruit quality parameters (firmness, colour, sugars, acids). A complete computer network has been designed to control all the processes, store the collected data in real time, and perform the computations. The sampling methods have also been defined to guarantee the confidence of the results. Among the advantages of the proposed quality assessment system are: the minimisation of human subjectivity, the ability to use modern measuring techniques, and the possibility of also using it as a supplier's quality control system. It can also help clarify the quality limits of fruit among members of the commercial channel, and serve as a first step in the standardisation of quality control procedures.
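The statistical sampling protocols mentioned above typically rest on acceptance sampling. As a hedged illustration (the abstract does not specify the plan actually used), here is the standard single-sampling binomial calculation, with arbitrary example values for lot size behaviour:

```python
# Single-sampling acceptance plan: inspect n fruits, accept the lot if
# at most c are defective. P(accept) for a true defect fraction p is
# the binomial CDF up to c. The (n, c, p) values below are examples.

from math import comb

def accept_probability(n, c, p):
    """P(accept) = sum_{k=0..c} C(n,k) * p^k * (1-p)^(n-k)."""
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(c + 1))

# Example plan: sample 50 fruits, tolerate at most 2 defects.
p_good_lot = accept_probability(50, 2, 0.01)  # nearly always accepted
p_bad_lot = accept_probability(50, 2, 0.05)   # accepted only ~half the time
```

Plotting P(accept) against p gives the plan's operating characteristic curve, which is how the "confidence of the results" claimed above is quantified.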
Abstract:
Extreme runup is a key parameter for shore risk analysis, in which an accurate, quantitative estimate of the upper limit reached by waves is essential. Runup can be better approximated by splitting it into the setup and swash semi-amplitude contributions. In experimental studies, recording the setup is difficult due to infragravity motions within the surf zone; hence, it would be desirable to measure the setup with available methodologies and devices. In this research, we evaluate the convenience of estimating the setup directly as the mean level in the swash zone for experimental runup analysis through a physical model. A physical mobile-bed model was set up in a wave flume at the Laboratory for Maritime Experimentation of CEDEX. The wave flume is 36 metres long, 6.5 metres wide and 1.3 metres high. The physical model was designed to cover a reasonable range of parameters: three different slopes (1/50, 1/30 and 1/20), two sand grain sizes (D50 = 0.12 mm and 0.70 mm), and deep-water Iribarren numbers (ξ0) from 0.1 to 0.6. The best available formulations were chosen for estimating a theoretical setup in the physical model. Once the theoretical setup had been obtained, it was compared with a direct estimate of the setup as the mean level of the swash oscillation, as usually considered in extreme runup analyses. A good correlation was found between the theoretical and time-averaged setup, and a relation between them is proposed. Extreme runup is then analysed as the sum of the setup and the swash semi-amplitude. An equation is proposed that could be applied to reflective beaches with a strong foreshore slope dependence.
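The decomposition used above, runup as setup plus swash semi-amplitude, with the deep-water Iribarren number as the governing parameter, can be written down directly. This is only the generic definition of those quantities, not the paper's proposed equation:

```python
# Sketch of the runup decomposition: R = setup + S/2, where S is the
# swash excursion, and the deep-water Iribarren (surf similarity)
# number xi0 = tan(beta) / sqrt(H0 / L0) characterises the beach state.

import math

def iribarren(slope, h0, l0):
    """Deep-water Iribarren number for foreshore slope tan(beta),
    deep-water wave height h0 and wavelength l0 (same units)."""
    return slope / math.sqrt(h0 / l0)

def extreme_runup(setup, swash_excursion):
    """Runup as the setup plus the swash semi-amplitude."""
    return setup + swash_excursion / 2.0

# Example within the tested range: 1/20 slope, H0 = 1 m, L0 = 100 m.
xi0 = iribarren(1.0 / 20.0, 1.0, 100.0)   # 0.5, inside the 0.1-0.6 range
r = extreme_runup(0.3, 0.8)                # illustrative setup and swash values
```

Measuring the setup directly as the time-averaged swash level, as proposed above, fixes the first term of this sum experimentally.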
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main problem for integrated circuits as technology scales below 22 nm. Small imperfections in device fabrication now give rise to significant random differences in their electrical characteristics, which must be taken into account during the design phase. The new processes and materials required to fabricate devices of such reduced dimensions are giving rise to effects that ultimately result in increased static power consumption or greater vulnerability to radiation. SRAM memories are already the most vulnerable part of an electronic system, not only because they represent more than half the area of today's SoCs and microprocessors, but also because process variations affect them critically, with the failure of a single cell compromising the entire memory. This thesis addresses the various challenges posed by SRAM design in the smallest technology nodes. In a scenario of increasing variability, it considers problems such as energy consumption, design that accounts for low-level technology effects, and radiation hardening. First, given the growing variability of devices in the smallest technology nodes, as well as the appearance of new variability sources due to the inclusion of new devices and the reduction of their dimensions, accurate modelling of that variability is crucial. The thesis proposes extending the injector method, which models variability at circuit level while abstracting its physical causes, by adding two new sources to model the sub-threshold slope and DIBL, of growing importance in FinFET technology. The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit level. The mean square error when simulating stability and performance metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the estimation of the failure probability improves by several orders of magnitude. Low-power design is one of the main current applications given the growing importance of battery-dependent mobile devices. It is equally necessary because of the high power densities of current systems, in order to reduce their thermal dissipation and its consequences for aging. The traditional method of lowering the supply voltage to reduce power consumption is problematic for SRAM memories given the growing impact of variability at low voltages. A cell design is proposed that uses negative bit-line values to reduce write failures as the main supply voltage is lowered. Despite using a second power supply for the negative bit-line voltage, the proposed design reduces power consumption by up to 20% compared with a conventional cell. A new metric, the hold trip point, is proposed to prevent new types of failure caused by the use of negative voltages, together with an alternative method to estimate read speed that reduces the number of required simulations. As electronic devices continue to shrink, new mechanisms are introduced to ease the fabrication process or to reach the performance required by each new technology generation. One example is the compressive or tensile stress applied to the fins in FinFET technologies, which alters the mobility of the transistors built from those fins. The effects of these mechanisms are strongly layout-dependent: the position of some transistors affects neighbouring transistors, and the effect can differ between transistor types. The use of a complementary SRAM cell with pMOS pass-gate transistors is proposed, reducing the fin length of the nMOS transistors and lengthening that of the pMOS ones, extending them into neighbouring cells and up to the edges of the cell array. Considering the effects of STI and SiGe stressors, the proposed design improves both transistor types, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, with no area increase. Finally, radiation has been a recurring problem in electronics for space applications, but the reduced currents and voltages of modern devices are making them vulnerable to radiation-induced noise even at ground level. Although technologies such as SOI or FinFET reduce the amount of energy collected by the circuit during a particle strike, the large process variations of the smallest nodes will affect their radiation immunity. It is shown that radiation-induced errors can increase by up to 40% at the 7 nm node when process variations are considered, compared with the nominal case. This increase is larger than the improvement obtained by designing radiation-hardened memory cells, suggesting that reducing variability would yield a greater benefit.
ABSTRACT Reliability is becoming the main concern for integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in the devices' electrical characteristics, which must be dealt with during design. New processes and materials, required to fabricate such extremely small devices, are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half the chip area of today's SoCs and microprocessors, but they are also critically affected by variation sources, with a failure in a single cell making the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a scenario of increasing variability, issues such as energy consumption, technology-aware design, and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear with new devices and shortened lengths, accurate modeling of the variability is crucial. We propose to extend the injectors method, which models variability at circuit level while abstracting its physical sources, to better model the sub-threshold slope and drain-induced barrier lowering (DIBL), which are gaining importance in FinFET technology. The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit. The mean square error in estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices.
It is also relevant because of the increased power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write-assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, has been introduced to deal with the new sources of failure in cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of successive nodes. As an example, consider the compressive or tensile strain applied in FinFET technology, which alters the mobility of the transistors made out of the affected fins. The effects of these mechanisms are strongly layout-dependent, with transistors being affected by their neighbors, and different transistor types being affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of the nMOS devices and achieve long, uncut fins for the pMOS devices when the cell is placed in its array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistors, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.
While radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making them more vulnerable to radiation-induced transient noise, even at ground level. Even if SOI or FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations that the smallest nodes present will affect their radiation hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cells, suggesting that reducing process variations would bring a greater improvement.
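The Monte Carlo flow used throughout the thesis, drawing per-transistor variations and counting cell failures, can be sketched as follows. The linear "margin" model below is a deliberately crude stand-in for a SPICE-level cell simulation, and all parameter values are hypothetical:

```python
# Hedged sketch of Monte Carlo yield estimation under process
# variation: each of a 6T cell's transistors gets an independent
# Gaussian threshold-voltage shift, a cell metric is evaluated, and
# failures (metric < 0) are counted. The linear margin model is a toy.

import random

def mc_failure_probability(n_trials, sigma_vt, margin_nominal,
                           sensitivity, seed=1):
    """Fraction of sampled cells whose margin falls below zero."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Toy model: margin responds linearly to the summed Vt shifts.
        margin = margin_nominal + sensitivity * sum(
            rng.gauss(0.0, sigma_vt) for _ in range(6))
        if margin < 0.0:
            failures += 1
    return failures / n_trials

p_fail = mc_failure_probability(5000, 0.05, 0.1, -1.0)
```

For realistic (tiny) failure probabilities, plain Monte Carlo needs enormous sample counts, which is why the thesis's injector-based modeling and reduced-simulation estimation methods matter.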
Abstract:
Using the coupled climate model CLIMBER-3α, we investigate changes in sea surface elevation due to a weakening of the thermohaline circulation (THC). In addition to a global sea level rise due to warming of the deep sea, this leads to a regional dynamic sea level change that follows any change in the ocean circulation quasi-instantaneously. We show that the magnitude of this dynamic effect can locally reach up to ~1 m, depending on the initial THC strength. In some regions the rate of change can be up to 20-25 mm/yr. The emerging patterns are discussed with respect to the oceanic circulation changes. Most prominent is a south-north gradient reflecting the changes in geostrophic surface currents. Our results suggest that an analysis of observed sea level change patterns could be useful for monitoring the THC strength.
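The link between surface currents and the sea level gradient invoked above is the standard geostrophic balance: a change in current speed implies a change in the cross-current sea surface slope. This sketch uses round illustrative numbers, not model output from CLIMBER-3α:

```python
# Geostrophic balance sketch: v = (g / f) * d(eta)/dx, so a change
# delta_v in surface current speed across a current of width W implies
# a sea surface height change of roughly f * delta_v * W / g.

import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s
G = 9.81           # gravitational acceleration, m/s^2

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def sea_level_difference(delta_v, lat_deg, width_m):
    """Sea surface height change (m) across a current of width width_m
    when its geostrophic speed changes by delta_v (m/s)."""
    return coriolis(lat_deg) * delta_v * width_m / G

# Illustrative values: a 0.1 m/s slowdown of a 1000-km-wide current at 45°N.
d_eta = sea_level_difference(0.1, 45.0, 1.0e6)
```

With these round numbers the implied height change is about 1 m, the same order as the dynamic effect reported above.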
Abstract:
Chandra data in the COSMOS, AEGIS-XD and 4 Ms Chandra Deep Field South are combined with multiwavelength photometry available in those fields to determine the rest-frame U − V versus V − J colours of X-ray AGN hosts in the redshift intervals 0.1 < z < 0.6 (mean z¯=0.40) and 0.6 < z < 1.2 (mean z¯=0.85). This combination of colours provides an effective and least model-dependent means of separating quiescent from star-forming, including dust reddened, galaxies. Morphological information emphasizes differences between AGN populations split by their U − V versus V − J colours. AGN in quiescent galaxies consist almost exclusively of bulges, while star-forming hosts are equally split between early- and late-type hosts. The position of AGN hosts on the U − V versus V − J diagram is then used to set limits on the accretion density of the Universe associated with evolved and star-forming systems independent of dust-induced biases. It is found that most of the black hole growth at z ≈ 0.40 and 0.85 is associated with star-forming hosts. Nevertheless, a non-negligible fraction of the X-ray luminosity density, about 15-20 per cent, at both z¯=0.40 and 0.85, is taking place in galaxies in the quiescent region of the U − V versus V − J diagram. For the low-redshift sub-sample, 0.1 < z < 0.6, we also find tentative evidence, significant at the 2σ level, that AGN split by their U − V and V − J colours have different Eddington ratio distributions. AGN in blue star-forming hosts dominate at relatively high Eddington ratios. In contrast, AGN in red quiescent hosts become increasingly important as a fraction of the total population towards low Eddington ratios. At higher redshift, z > 0.6, such differences are significant at the 2σ level only for sources with Eddington ratios ≳ 10^−3. These findings are consistent with scenarios in which diverse accretion modes are responsible for the build-up of supermassive black holes at the centres of galaxies.
We compare these results with the predictions of the GALFORM semi-analytic model for the cosmological evolution of AGN and galaxies. This model postulates two black hole fuelling modes: the first is linked to star formation events and the second takes place in passive galaxies. GALFORM predicts that a substantial fraction of the black hole growth at z < 1 is associated with quiescent galaxies, in apparent conflict with the observations. Relaxing the model's strong assumption that passive AGN hosts have zero star formation rate could bring those predictions into better agreement with the data.
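The UVJ quiescent/star-forming split that underpins the analysis above is a simple box cut in the rest-frame colour plane. The boundaries below follow the widely used Williams et al. (2009) criteria for z < 1 as an illustration; the exact cuts adopted in this study may differ:

```python
# Sketch of a rest-frame UVJ classification. A galaxy is "quiescent"
# if it sits in the red-U-V / blue-V-J corner of the diagram, which
# separates genuinely passive systems from dust-reddened star formers.

def is_quiescent(u_minus_v, v_minus_j):
    """True if the galaxy falls in the quiescent region of the UVJ plane
    (Williams et al. 2009 style box, assumed here for illustration)."""
    return (u_minus_v > 1.3
            and v_minus_j < 1.6
            and u_minus_v > 0.88 * v_minus_j + 0.49)
```

The diagonal term is what distinguishes this two-colour selection from a single red/blue colour cut: dusty star-forming galaxies are red in U − V but also red in V − J, so they fall outside the box.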