957 results for data dependence


Relevance: 30.00%

Abstract:

We present a suite of new high-resolution records (0-135 ka) representing pulses of aeolian, fluvial, and biogenic sedimentation along the Senegalese continental margin. A multiproxy approach based on rock magnetic, element, and color data was applied to three cores enclosing the present-day northern limit of the ITCZ. A strong episodic aeolian contribution, driven by stronger winds and dry conditions and characterized by high hematite and goethite input, was revealed north of 13°N. These millennial-scale dust fluxes are synchronous with North Atlantic Heinrich stadials. Fluvial clay input driven by the West African monsoon predominates at 12°N and varies on Dansgaard-Oeschger time scales, while marine productivity is strongly enhanced during the African humid periods and marine isotope stage 5. From latitudinal signal variations, we deduce that the summer position of the ITCZ during the last glacial was located between the core positions at 12°26'N and 13°40'N. Furthermore, this work also shows that submillennial periods of aridity over northwest Africa occurred more frequently and farther south than previously thought.

Relevance: 30.00%

Abstract:

The volumetric magnetic susceptibility was measured at frequencies of 300 and 3000 Hz in a static field of 300 mA/m using a Magnon International VSFM in the Laboratory for Environmental- and Palaeomagnetism at the University of Bayreuth. The magnetic susceptibility was mass-normalised. The frequency dependence was calculated as MSfd = (MSlf − MShf) / MSlf × 100 [%]. A spectrophotometer (Konica Minolta CM-5) was used to determine the colour of dried and homogenised sediment samples by detecting the diffusely reflected light under standardised observation conditions (2° Standard Observer, Illuminant C). Colour spectra were obtained in the visible range (360 to 740 nm) in 10 nm increments, and the spectral data were converted into the Munsell colour system and the CIELAB colour space (L*a*b*, CIE 1976) using the software SpectraMagic NX (Konica Minolta). Particle size was measured with a Laser Diffraction Particle Size Analyzer (Beckman Coulter LS 13 320 PIDS), calculating the mean diameters of the particles within a size range of 0.04–2000 µm. Each sample was measured twice at two different concentrations to increase accuracy. Finally, all measurements with reliable obscuration (8–12%) were averaged.
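As a minimal sketch of the frequency-dependence calculation defined above (all numerical values and the sample-holder volume are hypothetical, not taken from the study), the following Python snippet mass-normalises the two susceptibility readings and evaluates MSfd:

```python
import numpy as np

# Hypothetical example values, for illustration only.
mass_kg = np.array([10.2e-3, 9.8e-3, 10.5e-3])   # dry sample mass [kg]
volume_m3 = 6.4e-6                               # sample-holder volume [m^3], assumed constant
kappa_lf = np.array([120e-6, 95e-6, 130e-6])     # volumetric susceptibility at 300 Hz (SI)
kappa_hf = np.array([115e-6, 92e-6, 124e-6])     # volumetric susceptibility at 3000 Hz (SI)

# Mass-normalised susceptibility: chi = kappa * V / m  [m^3/kg]
chi_lf = kappa_lf * volume_m3 / mass_kg
chi_hf = kappa_hf * volume_m3 / mass_kg

# Frequency dependence as defined in the text: MSfd = (MSlf - MShf) / MSlf * 100 [%]
ms_fd = (chi_lf - chi_hf) / chi_lf * 100.0
print(np.round(ms_fd, 2))
```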

Relevance: 30.00%

Abstract:

The Arctic Ocean is warming at two to three times the global rate and is perceived to be a bellwether for ocean acidification. Increased CO2 concentrations are expected to have a fertilization effect on marine autotrophs, and higher temperatures should lead to increased rates of planktonic primary production. Yet a simultaneous assessment of the effects of warming and increased CO2 on primary production in the Arctic has not been conducted. Here we test the expectation that CO2-enhanced gross primary production (GPP) may be temperature dependent, using data from several oceanographic cruises and experiments from both spring and summer in the European sector of the Arctic Ocean. Results confirm that CO2 enhances GPP (by a factor of up to ten) over a range of 145–2,099 µatm; however, the greatest effects are observed only at lower temperatures and are constrained by nutrient and light availability to the spring period. The temperature dependence of CO2-enhanced primary production has significant implications for metabolic balance in a warmer, CO2-enriched Arctic Ocean in the future. In particular, it indicates that a twofold increase in primary production during the spring is likely in the Arctic.
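The abstract does not specify the statistical model used; as one hedged illustration of how a temperature dependence of the CO2 enhancement could be tested, the sketch below fits an ordinary least-squares model with a pCO2 × temperature interaction term. All data and column meanings are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic "cruise" data: pCO2 [µatm], temperature [°C], and GPP (arbitrary units)
rng = np.random.default_rng(0)
pco2 = rng.uniform(145, 2099, 200)
temp = rng.uniform(-1.5, 8.0, 200)
# Synthetic GPP whose CO2 response weakens as temperature increases
gpp = 1.0 + 0.004 * pco2 - 0.0004 * pco2 * temp + rng.normal(0, 0.5, 200)

# Design matrix for GPP ~ pCO2 + T + pCO2:T
X = np.column_stack([np.ones_like(pco2), pco2, temp, pco2 * temp])
beta, *_ = np.linalg.lstsq(X, gpp, rcond=None)

# A negative interaction coefficient would indicate that the CO2 enhancement
# of GPP is strongest at low temperatures, the pattern reported in the abstract.
print(dict(zip(["intercept", "pCO2", "T", "pCO2xT"], np.round(beta, 5))))
```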

Relevance: 30.00%

Abstract:

We report on the sensitivity of the superconducting critical temperature (TC) to layer thickness, as well as on TC reproducibility, in Mo/Au bilayers. Resistivity measurements are shown for samples with a fixed Au thickness (dAu) and a Mo thickness (dMo) ranging from 50 to 250 nm, and for samples with a fixed dMo and varying dAu. Experimental data are discussed in the framework of the Martinis model, whose application to samples with dAu above the coherence length is analysed in detail. Results show good coupling between the normal and superconducting layers and excellent TC reproducibility, allowing the Mo layer thickness and the bilayer TC to be accurately correlated.

Relevance: 30.00%

Abstract:

Nitrogen sputtering yields as high as 10^4 atoms/ion are obtained by irradiating N-rich Cu3N films (N concentration: 33 ± 2 at.%) with Cu ions at energies in the range 10–42 MeV. The kinetics of N sputtering as a function of ion fluence is determined at several energies (stopping powers) for films deposited on both glass and silicon substrates. The kinetic curves show that the amount of nitrogen released strongly increases with rising irradiation fluence until reaching a saturation level at a low remaining nitrogen fraction (5–10%), beyond which no further nitrogen reduction is observed. The sputtering rate for nitrogen depletion is found to be independent of the substrate and to increase linearly with electronic stopping power (Se). A stopping-power threshold (Sth) of ~3.5 keV/nm for nitrogen depletion has been estimated by extrapolation of the data. The experimental kinetic data have been analyzed within a bulk molecular recombination model. The microscopic mechanisms of the nitrogen depletion process are discussed in terms of a non-radiative exciton decay model. In particular, the estimated threshold is related to the minimum exciton density required to achieve efficient sputtering rates.
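As a simple illustration of the threshold extrapolation described above, the sketch below fits a straight line Y = k(Se − Sth) to yield-versus-stopping-power data and extrapolates it to zero yield. The numbers are invented for illustration; they are not the measured values from this experiment.

```python
import numpy as np

# Hypothetical (Se [keV/nm], sputtering yield [atoms/ion]) pairs, illustration only
se = np.array([5.0, 7.5, 10.0, 12.5, 15.0])
yield_n = np.array([1.6e3, 4.3e3, 6.9e3, 9.4e3, 1.2e4])

# Linear fit Y = m*Se + b; the threshold is where the fitted line crosses Y = 0
m, b = np.polyfit(se, yield_n, 1)
s_th = -b / m
print(f"Estimated threshold Sth ~ {s_th:.1f} keV/nm")
```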

Relevance: 30.00%

Abstract:

A validation of the burn-up simulation system EVOLCODE 2.0 is presented here, involving the experimental measurement of U and Pu isotopes and some fission-fragment production ratios after a burn-up of around 30 GWd/tU in a Pressurized Light Water Reactor (PWR). This work provides an in-depth analysis of the validation results, including the possible sources of the uncertainties. An uncertainty analysis based on the sensitivity methodology has also been performed, providing the uncertainties in the isotopic content propagated from the cross-section uncertainties. An improvement of the classical Sensitivity/Uncertainty (S/U) model has been developed to take into account the implicit dependence on the neutron flux normalization, that is, the effect of the constant power of the reactor. This improved S/U methodology, usually neglected in this kind of study, has proven to be an important contribution to explaining some simulation-experiment discrepancies. In general, for the most relevant actinides, the cross-section uncertainties are an important contributor to the simulation uncertainties, of the same order of magnitude as, and sometimes even larger than, the experimental uncertainties and the experiment-simulation differences. Additionally, some hints for the improvement of the JEFF3.1.1 fission yield library and for the correction of some errata in the experimental data are presented.
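The abstract does not spell out the propagation formula, but the classical S/U approach it builds on is the "sandwich" rule, in which a sensitivity vector is combined with a cross-section covariance matrix. The minimal sketch below uses hypothetical sensitivities and covariances purely to show the mechanics.

```python
import numpy as np

# Hypothetical relative sensitivities of one isotopic concentration to three
# cross sections (dN/N per dσ/σ), and a hypothetical relative covariance matrix
# of those cross sections. Real values would come from the S/U analysis itself.
S = np.array([0.45, -0.20, 0.10])
cov = np.array([
    [0.0025, 0.0005, 0.0],
    [0.0005, 0.0016, 0.0],
    [0.0,    0.0,    0.0036],
])

# Sandwich rule: relative variance of the response = S^T C S
rel_var = S @ cov @ S
print(f"Propagated relative uncertainty ~ {np.sqrt(rel_var) * 100:.2f} %")
```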

Relevance: 30.00%

Abstract:

Data centers are currently found in every sector of the world economy. They are composed of thousands of servers, providing service to users globally, 24 hours a day and 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced very significant development. The need to handle efficiently the computational requirements of next-generation applications, together with the growing demand for resources in traditional applications, has facilitated the rapid growth and proliferation of data centers. The main drawback of this capacity increase has been the rapid and dramatic rise in the energy consumption of these infrastructures. In 2010, the electricity bill of data centers represented 1.3% of worldwide electricity consumption. In 2012 alone, data center power consumption grew by 63%, reaching 38 GW. A further growth of 17%, up to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions to the atmosphere. This doctoral thesis addresses the energy problem by proposing proactive and reactive temperature- and energy-aware techniques that contribute to more efficient data centers. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and about the computing and cooling resources of the data center to optimize consumption. Moreover, data centers are considered as a crucial element within the framework of the executed application, optimizing not only the consumption of the data center but the global energy consumption of the application. The main components of consumption in data centers are the computing power used by the IT equipment and the cooling needed to keep the servers within an operating temperature range that ensures their correct functioning. Because of the cubic relation between fan speed and fan power consumption, solutions based on over-provisioning cold air to the server generally result in energy inefficiencies. On the other hand, higher processor temperatures lead to higher leakage consumption, owing to the exponential relation of leakage with temperature. Moreover, workload characteristics and resource allocation policies have an important impact on the tradeoffs between leakage current and cooling consumption. The first major contribution of this work is the development of power and temperature models that describe these tradeoffs between leakage current and cooling, together with the proposal of strategies to minimize server consumption through the joint allocation of cooling and workload from a multivariate perspective. When scaling to the data center level, we observe a similar behavior in terms of the tradeoff between leakage currents and cooling. As room temperature increases, cooling efficiency improves. However, this increase in room temperature causes a rise in CPU temperature and, therefore, also in leakage consumption.
Moreover, the dynamics of the data room are very uneven and unbalanced, due to workload allocation and the heterogeneity of the IT equipment. The second contribution of this thesis is the proposal of temperature- and heterogeneity-aware allocation techniques that jointly optimize the assignment of tasks and cooling to the servers. These strategies need to be backed by flexible models, able to work in real time, that describe the system from a high level of abstraction. Within the scope of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, such as the data center. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the global energy cost of the system. The third contribution of this thesis is the development of energy optimizations for the overall application by evaluating the costs of executing part of the required processing at other abstraction levels, ranging from the nodes to the data center, by means of load-balancing techniques. In summary, the work presented in this thesis makes contributions to leakage- and cooling-aware server modeling and optimization, and to data center modeling and the development of heterogeneity-aware allocation policies, and it develops mechanisms for the energy optimization of next-generation applications across several abstraction levels.

ABSTRACT

Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24/7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced a significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17% to 43 GW was estimated in 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility, but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a certain temperature range that ensures safe operation.
Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves. However, as room temperature increases, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions on leakage- and cooling-aware server modeling and optimization, on data center thermal modeling and heterogeneity-aware data center resource allocation, and it develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
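To make the leakage-cooling tradeoff concrete, here is a minimal sketch, not the thesis' actual models, that combines the cubic fan-power law with an exponential leakage term and sweeps fan speed to find the setting minimizing total server power. All constants, and the assumption that thermal resistance scales inversely with fan speed, are illustrative.

```python
import numpy as np

# Illustrative constants, not taken from the thesis
P_DYNAMIC = 80.0        # dynamic CPU power [W], assumed fixed by the workload
T_AMBIENT = 25.0        # inlet air temperature [°C]
R_TH_REF = 0.2          # CPU-to-air thermal resistance at relative fan speed 1.0 [°C/W]
FAN_P_REF = 12.0        # fan power at relative fan speed 1.0 [W]
LEAK_REF = 10.0         # leakage power at the reference temperature [W]
T_REF = 50.0            # reference CPU temperature [°C]
LEAK_EXP = 0.02         # exponential leakage coefficient [1/°C]

def total_power(fan_speed):
    """Total server power for a given relative fan speed (> 0)."""
    fan_power = FAN_P_REF * fan_speed**3          # cubic fan law
    r_th = R_TH_REF / fan_speed                   # assumption: faster fan -> lower thermal resistance
    # Fixed-point iteration: leakage heats the CPU, which in turn raises leakage
    t_cpu, leak = T_REF, LEAK_REF
    for _ in range(20):
        t_cpu = T_AMBIENT + r_th * (P_DYNAMIC + leak)
        leak = LEAK_REF * np.exp(LEAK_EXP * (t_cpu - T_REF))
    return P_DYNAMIC + leak + fan_power

speeds = np.linspace(0.4, 2.5, 200)
powers = [total_power(s) for s in speeds]
best = speeds[int(np.argmin(powers))]
print(f"Power-optimal relative fan speed ~ {best:.2f}")
```

With these illustrative numbers the optimum lies at an intermediate fan speed: spinning faster wastes cubic fan power, while spinning slower lets leakage grow exponentially with CPU temperature.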

Relevance: 30.00%

Abstract:

When lipid synthesis is limited in HepG2 cells, apoprotein B100 (apoB100) is not secreted but rapidly degraded by the ubiquitin-proteasome pathway. To investigate apoB100 biosynthesis and secretion further, the physical and functional states of apoB100 destined for either degradation or lipoprotein assembly were studied under conditions in which lipid synthesis, proteasomal activity, and microsomal triglyceride transfer protein (MTP) lipid-transfer activity were varied. Cells were pretreated with a proteasomal inhibitor (which remained with the cells throughout the experiment) and radiolabeled for 15 min. During the chase period, labeled apoB100 remained associated with the microsomes. Furthermore, by crosslinking sec61β to apoB100, we showed that apoB100 remained close to the translocon at the same time apoB100–ubiquitin conjugates could be detected. When lipid synthesis and lipoprotein assembly/secretion were stimulated by adding oleic acid (OA) to the chase medium, apoB100 was deubiquitinated, and its interaction with sec61β was disrupted, signifying completion of translocation concomitant with the formation of lipoprotein particles. MTP participates in apoB100 translocation and lipoprotein assembly. In the presence of OA, when MTP lipid-transfer activity was inhibited at the end of pulse labeling, apoB100 secretion was abolished. In contrast, when the labeled apoB100 was allowed to accumulate in the cell for 60 min before adding OA and the inhibitor, apoB100 lipidation and secretion were no longer impaired. Overall, the data imply that during most of its association with the endoplasmic reticulum, apoB100 is close to or within the translocon and is accessible to both the ubiquitin-proteasome and lipoprotein-assembly pathways. Furthermore, MTP lipid-transfer activity seems to be necessary only for early translocation and lipidation events.

Relevance: 30.00%

Abstract:

Several microbial systems have been shown to yield advantageous mutations in slowly growing or nongrowing cultures. In one assay system, the stationary-phase mutation mechanism differs from growth-dependent mutation, demonstrating that the two are different processes. This system assays reversion of a lac frameshift allele on an F′ plasmid in Escherichia coli. The stationary-phase mutation mechanism at lac requires recombination proteins of the RecBCD double-strand-break repair system and the inducible error-prone DNA polymerase IV, and the mutations are mostly −1 deletions in small mononucleotide repeats. This mutation mechanism is proposed to occur by DNA polymerase errors made during replication primed by recombinational double-strand-break repair. It has been suggested that this mechanism is confined to the F plasmid. However, the cells that acquire the adaptive mutations show hypermutation of unrelated chromosomal genes, suggesting that chromosomal sites also might experience recombination protein-dependent stationary-phase mutation. Here we test directly whether the stationary-phase mutations in the bacterial chromosome also occur via a recombination protein- and pol IV-dependent mechanism. We describe an assay for chromosomal mutation in cells carrying the F′ lac. We show that the chromosomal mutation is recombination protein- and pol IV-dependent and also is associated with general hypermutation. The data indicate that, at least in these male cells, recombination protein-dependent stationary-phase mutation is a mechanism of general inducible genetic change capable of affecting genes in the bacterial chromosome.

Relevance: 30.00%

Abstract:

We studied the global and local ℳ-Z relation based on the first data available from the CALIFA survey (150 galaxies). This survey provides integral field spectroscopy of the complete optical extent of each galaxy (up to 2-3 effective radii), with a resolution high enough to separate individual H II regions and/or aggregations. About 3000 individual H II regions have been detected. The spectra cover the wavelength range between [OII]3727 and [SII]6731, with a signal-to-noise ratio sufficient to derive the oxygen abundance and star-formation rate associated with each region. In addition, we computed the integrated and spatially resolved stellar masses (and surface densities) based on SDSS photometric data. We explore the relations between the stellar mass, oxygen abundance, and star-formation rate using this dataset. We derive a tight relation between the integrated stellar mass and the gas-phase abundance, with a dispersion lower than the one already reported in the literature (σ_Δlog(O/H) = 0.07 dex). Indeed, this dispersion is only slightly higher than the typical error derived for our oxygen abundances. However, we found no secondary relation with the star-formation rate other than the one induced by the primary relation of this quantity with the stellar mass. The analysis for our sample of ~3000 individual H II regions confirms (i) a local mass-metallicity relation and (ii) the lack of a secondary relation with the star-formation rate. The same analysis was performed, with similar results, for the specific star-formation rate. Our results agree with the scenario in which gas recycling in galaxies, both locally and globally, is much faster than other typical timescales, such as that of gas accretion by inflow and/or metal loss due to outflows. In essence, late-type/disk-dominated galaxies seem to be in a quasi-steady situation, with a behavior similar to the one expected from an instantaneous recycling/closed-box model.
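As an illustration of the kind of analysis described, and not the paper's actual fitting code, the sketch below fits a simple mass-metallicity relation, measures the residual dispersion, and checks whether the residuals correlate with star-formation rate. The sample, the polynomial functional form, and all numbers are assumptions for demonstration.

```python
import numpy as np

# Synthetic galaxy sample: log stellar mass, oxygen abundance 12+log(O/H), and log SFR
rng = np.random.default_rng(1)
log_mstar = rng.uniform(9.0, 11.5, 150)
oh = 8.2 + 0.25 * (log_mstar - 9.0) - 0.02 * (log_mstar - 9.0)**2 + rng.normal(0, 0.07, 150)
log_sfr = rng.normal(0.0, 0.4, 150)

# Fit a second-order polynomial M-Z relation and measure the residual dispersion
coeffs = np.polyfit(log_mstar, oh, 2)
residuals = oh - np.polyval(coeffs, log_mstar)
print(f"sigma_dlog(O/H) = {residuals.std():.3f} dex")

# Test for a secondary dependence: correlation of the M-Z residuals with SFR
r = np.corrcoef(residuals, log_sfr)[0, 1]
print(f"residual-SFR correlation: r = {r:.2f}")
```

A residual dispersion close to the abundance errors and a near-zero residual-SFR correlation would correspond to the two findings reported in the abstract.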