988 results for Los Alamos National Laboratory
Abstract:
A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we assess the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to quantify the overheads of para-virtualization, and also the additional overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on the different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess each virtualization system, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and driven by privileged components. A virtualization system can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run; no modifications are needed in the guest OS or application, i.e. the guest is unaware of the virtualized environment and runs normally. Para-virtualization, by contrast, is OS-assisted virtualization: it requires modification of the guest operating systems running in the virtual machines, so that they are aware of running on virtualized rather than bare hardware, in exchange for near-native performance. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
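Since the measurement procedure above amounts to comparing mean wall-clock times for the same benchmark across three configurations, the overhead arithmetic can be made concrete. The sketch below is illustrative only: the benchmark binary name and the mean_runtime helper are hypothetical, and in practice each timing would be collected on the corresponding system rather than in one script.

```python
import subprocess
import time

def mean_runtime(cmd, runs=5):
    """Mean wall-clock time of a benchmark command over several runs."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)  # run the benchmark to completion
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Hypothetical Netlib-style benchmark binary; each mean would in practice
# be measured on its own system (bare metal, para-virtualized guest,
# guest with monitoring/logging enabled).
t_bare = mean_runtime(["./linpack_bench"])
t_pv = mean_runtime(["./linpack_bench"])
t_pv_logged = mean_runtime(["./linpack_bench"])

print(f"para-virtualization overhead: {100 * (t_pv - t_bare) / t_bare:.1f}%")
print(f"monitoring/logging overhead:  {100 * (t_pv_logged - t_pv) / t_pv:.1f}%")
```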
Abstract:
The extent and thickness of the Arctic sea ice cover have decreased dramatically in the past few decades, with minima in sea ice extent in September 2005 and 2007. These minima were not predicted in the IPCC AR4 report, suggesting that the sea ice component of climate models should represent more realistically the processes controlling the sea ice mass balance. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds accumulate on the surface of sea ice from snow and sea ice melt, and their presence reduces the albedo of the ice cover, leading to further melt. Toward the end of the melt season, melt ponds cover up to 50% of the sea ice surface. We have developed a melt pond evolution theory. Here, we incorporate this melt pond theory into the Los Alamos CICE sea ice model, which has required us to include the refreezing of melt ponds. We present results showing that the presence, or otherwise, of a representation of melt ponds has a significant effect on the predicted sea ice thickness and extent. We also present a sensitivity study to uncertainty in the sea ice permeability, the number of thickness categories in the model representation, the meltwater redistribution scheme, and the pond albedo. We conclude with a recommendation that our melt pond scheme be included in sea ice models, and that the number of thickness categories be increased and concentrated at lower thicknesses.
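The albedo feedback described above can be summarised, in schematic form, as an area-weighted average; the pond fraction symbol f_p and the two albedo values are illustrative notation, not the paper's:

```latex
% Schematic effective surface albedo with pond fraction f_p:
\alpha_{\mathrm{eff}} = (1 - f_p)\,\alpha_{\mathrm{ice}} + f_p\,\alpha_{\mathrm{pond}},
\qquad \alpha_{\mathrm{pond}} < \alpha_{\mathrm{ice}}
```

Because ponded ice is darker than bare ice, a pond fraction approaching 50% by late summer substantially lowers the effective albedo, which drives the further melt the abstract describes.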
Abstract:
The extent and thickness of the Arctic sea ice cover have decreased dramatically in the past few decades, with minima in sea ice extent in September 2007 and 2011, a decline that climate models did not predict. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds form on Arctic sea ice during the melting season, and their presence affects the heat and mass balances of the ice cover, mainly by decreasing the value of the surface albedo by up to 20%. We have developed a melt pond model suitable for forecasting the presence of melt ponds based on sea ice conditions. This model has been incorporated into the Los Alamos CICE sea ice model, the sea ice component of several IPCC climate models. Simulations for the period 1990 to 2007 are in good agreement with observed ice concentration. In comparison to simulations without ponds, the September ice volume is nearly 40% lower. Sensitivity studies within the range of uncertainty reveal that, of the parameters pertinent to the present melt pond parameterization and for our prescribed atmospheric and oceanic forcing, variations in optical properties and in the amount of snowfall have the strongest impact on sea ice extent and volume. We conclude that melt ponds will play an increasingly important role in the melting of the Arctic ice cover, and that their incorporation in the sea ice component of global circulation models is essential for accurate future sea ice forecasts.
Abstract:
A new rheology that explicitly accounts for the subcontinuum anisotropy of the sea ice cover is implemented into the Los Alamos sea ice model. This contrasts with all models of sea ice included in global circulation models, which use an isotropic rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. The anisotropic rheology provides a subcontinuum description of the mechanical behavior of sea ice and accounts for a continuum-scale stress with a large shear to compression ratio and a tensile stress component. Results over the Arctic of a stand-alone version of the model are presented, and anisotropic model sensitivity runs are compared with a reference elasto-visco-plastic simulation. Under realistic forcing, sea ice quickly becomes highly anisotropic over large length scales, as is observed from satellite imagery. The influence of the new rheology on the state and dynamics of the sea ice cover is discussed. Our reference anisotropic run reveals that the new rheology leads to a substantial change in the spatial distribution of ice thickness and ice drift relative to the reference standard visco-plastic isotropic run, with ice thickness regionally increased by more than 1 m and ice speed reduced by up to 50%.
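Structure tensors of this kind are conventionally defined as the second moment of an orientation distribution; a schematic definition under that standard convention (not necessarily the paper's exact notation) is:

```latex
% Schematic structure tensor: second moment of the distribution
% \psi(\mathbf{n}) of a unit orientation vector \mathbf{n}.
\mathbf{A} = \int \mathbf{n}\otimes\mathbf{n}\,\psi(\mathbf{n})\,d\mathbf{n},
\qquad \operatorname{tr}\mathbf{A} = 1
```

In two dimensions, isotropy corresponds to A = I/2; the departure of the eigenvalues from 1/2 measures the degree of alignment that the rheology responds to.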
Abstract:
A multithickness sea ice model explicitly accounting for the ridging and sliding friction contributions to sea ice stress is developed. Both ridging and sliding contributions depend on the deformation type through functions adopted from the Ukita and Moritz kinematic model of floe interaction. In contrast to most previous work, the ice strength of a uniform ice sheet of constant ice thickness is taken to be proportional to the ice thickness raised to the 3/2 power, as is revealed in discrete element simulations by Hopkins. The new multithickness sea ice model for sea ice stress has been implemented into the Los Alamos “CICE” sea ice model code and is shown to improve agreement between model predictions and observed spatial distribution of sea ice thickness in the Arctic.
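Written explicitly, the strength scaling stated above is as follows, with P* a constant of proportionality introduced here only for illustration:

```latex
% Ice strength of a uniform sheet of thickness h (Hopkins scaling):
P(h) = P^{*}\,h^{3/2}
```

This supersedes the linear dependence on h used in most earlier strength formulations, which is the contrast with previous work that the abstract draws.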
Abstract:
Over Arctic sea ice, pressure ridges and floe and melt pond edges all introduce discrete obstructions to the flow of air or water past the ice and are a source of form drag. In current climate models form drag is only accounted for by tuning the air–ice and ice–ocean drag coefficients, that is, by effectively altering the roughness length in a surface drag parameterization. The existing approach of skin drag parameter tuning is poorly constrained by observations and fails to describe correctly the physics associated with the air–ice and ocean–ice drag. Here, the authors combine recent theoretical developments to deduce the total neutral form drag coefficients from properties of the ice cover such as ice concentration, the vertical extent and area of the ridges, freeboard and floe draft, and the size of floes and melt ponds. The drag coefficients are incorporated into the Los Alamos Sea Ice Model (CICE), and the authors show the influence of the new drag parameterization on the motion and state of the ice cover, the most noticeable effect being a depletion of sea ice over the west boundary of the Arctic Ocean and over the Beaufort Sea. The new parameterization allows the drag coefficients to be coupled to the sea ice state and therefore to evolve spatially and temporally. It is found that the range of values predicted for the drag coefficients agrees with the range of values measured in several regions of the Arctic. Finally, the implications of the new form drag formulation for the spinup or spindown of the Arctic Ocean are discussed.
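For an array of discrete obstacles, form drag is conventionally proportional to the obstacle aspect ratio; a schematic single-obstacle-type contribution, with illustrative symbols (c_e an effective resistance coefficient, H the obstacle height, D the mean spacing) rather than the paper's full parameterization, is:

```latex
% Schematic form drag contribution from obstacles of height H
% spaced a mean distance D apart:
C_{d}^{\mathrm{form}} \sim \tfrac{1}{2}\,c_{e}\,\frac{H}{D}
```

The total neutral drag coefficient is then assembled from such contributions for ridges, floe edges and melt pond edges, plus a residual skin drag, which is what lets the coefficients evolve with the ice state as described above.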
Abstract:
The sea ice edge presents a region of many feedback processes between the atmosphere, ocean, and sea ice (Maslowski et al.). Here the authors focus on the impact of on-ice atmospheric and oceanic flows at the sea ice edge. Mesoscale jet formation due to the Coriolis effect is well understood over sharp changes in surface roughness such as coastlines (Hunt et al.). This sharp change in surface roughness is experienced by the atmosphere and ocean encountering a compacted sea ice edge. This paper presents a study of a dynamic sea ice edge responding to prescribed atmospheric and oceanic jet formation. An idealized analytical model of sea ice drift is developed and compared to a sea ice climate model [the Los Alamos Sea Ice Model (CICE)] run on an idealized domain. The response of the CICE model to jet formation is tested at various resolutions. It is found that the formation of atmospheric jets at the sea ice edge increases the wind speed parallel to the sea ice edge and results in the formation of a sea ice drift jet in agreement with an observed sea ice drift jet (Johannessen et al.). The increase in ice drift speed is dependent upon the angle between the ice edge and wind and results in up to a 40% increase in ice transport along the sea ice edge. The possibility of oceanic jet formation and the resultant effect upon the sea ice edge is less conclusive. Observations and climate model data of the polar oceans have been analyzed to show areas of likely atmospheric jet formation, with the Fram Strait being of particular interest.
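An idealized drift model of the kind mentioned can be thought of as starting from a free-drift momentum balance; a schematic steady form, assuming the standard terms rather than the authors' exact model, is:

```latex
% Schematic steady free-drift balance per unit area for ice of
% mass per unit area m = \rho_i h and drift velocity \mathbf{u}:
\boldsymbol{\tau}_a + \boldsymbol{\tau}_w - m f\,\mathbf{k}\times\mathbf{u} = \mathbf{0}
```

An along-edge wind jet enters through the air stress term, so the ice drift responds directly to the locally enhanced wind, consistent with the up to 40% increase in along-edge ice transport reported above.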
Abstract:
We present a modelling study of processes controlling the summer melt of the Arctic sea ice cover. We perform a sensitivity study and focus our interest on the thermodynamics at the ice–atmosphere and ice–ocean interfaces. We use the Los Alamos community sea ice model CICE, and additionally implement and test three new parametrization schemes: (i) a prognostic mixed layer; (ii) a three-equation boundary condition for the salt and heat flux at the ice–ocean interface; and (iii) a new lateral melt parametrization. Recent additions to the CICE model are also tested, including explicit melt ponds, a form drag parametrization and a halodynamic brine drainage scheme. The various sea ice parametrizations tested in this sensitivity study introduce a wide spread in the simulated sea ice characteristics. For each simulation, the total melt is decomposed into its surface, bottom and lateral melt components to assess the processes driving melt and how these vary regionally and temporally. Because this study quantifies the relative importance of several processes in driving the summer melt of sea ice, it can serve as a guide for future research priorities.
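The three-equation condition in (ii) conventionally couples a heat balance, a salt balance and the liquidus at the interface; a schematic statement with generic symbols (interface temperature T_b and salinity S_b, friction velocity u_*, exchange coefficients c_h and c_s, liquidus slope m_l), written from the standard formulation rather than from the paper, is:

```latex
% (1) Heat balance: turbulent ocean heat flux plus conduction
%     supplies the latent heat of the interface melt rate \dot{m}.
\rho_w c_w c_h u_{*} (T_w - T_b) + q_{\mathrm{cond}} = \rho_i L\,\dot{m}
% (2) Salt balance: turbulent salt flux matches the dilution
%     produced by melting relatively fresh ice (salinity S_i).
\rho_w c_s u_{*} (S_w - S_b) = \rho_i\,\dot{m}\,(S_b - S_i)
% (3) Liquidus: the interface sits at the freezing temperature
%     of the interface salinity.
T_b = -\,m_l\,S_b
```

Sign conventions vary between formulations; the essential point is that T_b and S_b are solved for jointly rather than being fixed at far-field mixed layer values.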
Abstract:
This paper deals with the effect of silica fume and styrene-butadiene (SBR) latex on the microstructure of the interfacial transition zone (ITZ) between Portland cement paste and aggregates (basalt). A scanning electron microscope (SEM) equipped with an energy-dispersive X-ray analysis system (EDX) was used to determine the ITZ thickness. In the plain concrete a marked ITZ around the aggregate particles (55 μm) was observed, while in concretes with silica fume or SBR latex the ITZ was less pronounced (35–40 μm). The best results, however, were observed in concretes with both silica fume and SBR latex (20–25 μm).
Abstract:
The study of planetary nebulae in the inner disk and bulge gives important information on the chemical abundances of elements such as He, N, O, Ar, and Ne, and on the evolution of these abundances, which is associated with the evolution of intermediate-mass stars and the chemical evolution of the Galaxy. We present accurate abundances of the elements He, N, S, O, Ar, and Ne for a sample of 54 planetary nebulae located towards the bulge of the Galaxy, for 33 of which the abundances are derived here for the first time. The abundances are obtained from observations in the optical domain made at the National Laboratory for Astrophysics (LNA, Brazil). The data show good agreement with other results in the literature, in the sense that the distribution of the abundances is similar to that found in those works.
Abstract:
Hepatitis C virus (HCV) infection frequently persists despite substantial virus-specific immune responses and combination therapy with pegylated interferon (IFN)-alpha and ribavirin. Major histocompatibility complex class I restricted CD8+ T cells are responsible for the control of viraemia in HCV infection, and several studies suggest protection against viral infection associated with specific HLAs. The reason for low rates of sustained viral response (SVR) in HCV patients remains unknown. Escape mutations in response to cytotoxic T lymphocytes are widely described; however, their influence on treatment outcome is poorly understood. Here, we investigate the differences in CD8 epitope frequencies from the Los Alamos database between groups of patients that showed distinct responses to pegylated IFN-alpha with ribavirin therapy, and test for evidence of natural selection on the virus in those who failed treatment, using five maximum likelihood evolutionary models from the PAML package. The group of sustained virological responders showed three epitopes with frequencies higher than in the non-responder group, all with statistical support, and we observed evidence of selection pressure in the latter group. No escape mutation was observed. Interestingly, the epitope VLSDFKTWL was 100% conserved in the SVR group. These results suggest that the response to treatment can be explained by an increase in immune pressure induced by interferon therapy, and that the presence of those epitopes may represent an important factor in determining the outcome of therapy.
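At its core, the between-group comparison described here is a per-epitope contingency test; the minimal sketch below uses Fisher's exact test on made-up counts, purely to illustrate the shape of the analysis. It is not the authors' data, nor their PAML-based selection analysis.

```python
from scipy.stats import fisher_exact

# Hypothetical counts: (sequences carrying the epitope, sequences lacking it)
# in sustained virological responders (SVR) and non-responders (NR).
epitope_counts = {
    "VLSDFKTWL": {"svr": (28, 2), "nr": (18, 12)},  # illustrative numbers only
}

for epitope, counts in epitope_counts.items():
    table = [list(counts["svr"]), list(counts["nr"])]
    odds_ratio, p_value = fisher_exact(table)
    print(f"{epitope}: odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```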
Abstract:
Protein degradation by the ubiquitin proteasome system releases large amounts of oligopeptides within cells. To investigate possible functions for these intracellularly generated oligopeptides, we fused them to a cationic transactivator peptide sequence using reversible disulfide bonds, introduced them into cells, and analyzed their effect on G protein-coupled receptor (GPCR) signal transduction. A mixture containing four of these peptides (20–80 μM) significantly inhibited the increase in the extracellular acidification response triggered by angiotensin II (ang II) in CHO-S cells transfected with the ang II type 1 receptor (AT1R-CHO-S). Subsequently, either alone or in a mixture, these peptides increased luciferase gene transcription in AT1R-CHO-S cells stimulated with ang II and in HEK293 cells treated with isoproterenol. These peptides without transactivator failed to affect GPCR cellular responses. All four functional peptides were shown in vitro to competitively inhibit the degradation of a synthetic substrate by thimet oligopeptidase. Overexpression of thimet oligopeptidase in both CHO-S and HEK293 cells was sufficient to reduce luciferase activation triggered by a specific GPCR agonist. Moreover, using individual peptides as baits in affinity columns, several proteins involved in GPCR signaling were identified, including alpha-adaptin A and dynamin 1. These results suggest that before their complete degradation, intracellular peptides similar to those generated by proteasomes can actively affect cell signaling, probably representing additional bioactive molecules within cells.
Abstract:
To understand the recent Brookhaven National Laboratory experiment E788 on ⁴ΛHe, we have outlined a simple theoretical framework, based on the independent-particle shell model, for the one-nucleon-induced nonmesonic weak decay spectra. Basically, the shapes of all the spectra are tailored by the kinematics of the corresponding phase space, depending very weakly on the dynamics, which is gauged here by the one-meson-exchange potential. In spite of the straightforwardness of the approach, good agreement with data is achieved. This might be an indication that the final-state interactions and the two-nucleon-induced processes are not very important in the decay of this hypernucleus. We have also found that the π + K exchange potential with soft vertex form factor cutoffs (Λπ ≈ 0.7 GeV, ΛK ≈ 0.9 GeV) is able to account simultaneously for the available experimental data on Γp and Γn for ⁴ΛHe, ⁴ΛH, and ⁵ΛHe.
Abstract:
A strain of Staphylococcus isolated by Dr. Fekete at the Sandia National Laboratory toxic metal dumping site in Sandia, New Mexico, has been found to reduce toxic Cr(VI) to the less toxic Cr(III) state. We have ascertained the environmental parameters for optimal bacterial growth and Cr(VI) reduction. This knowledge may be employed in a comprehensive bioremediation scheme designed to accelerate the natural reparation of the Sandia ecosystem. In addition, we have investigated the genetic and enzymatic basis for this Cr(VI)-reducing ability. This information may allow us to create more effective bioremediation schemes based on comprehensive knowledge of enzyme and gene function. Preliminary investigations toward this end have been carried out and may serve as the basis for a more thorough investigation.
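For reference, the detoxification the strain performs is a three-electron reduction per chromium atom; written, purely for illustration, for the dichromate form of Cr(VI) in acidic solution (a textbook half-reaction, not taken from the study):

```latex
\mathrm{Cr_2O_7^{2-}} + 14\,\mathrm{H^+} + 6\,e^- \longrightarrow 2\,\mathrm{Cr^{3+}} + 7\,\mathrm{H_2O}
```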
Abstract:
The basic purpose of the present work is to observe the principal administrative changes arising from the implementation of the Social Organization Project, part of the recent administrative reforms in Brazil proposed in the Director Plan of the State Reform and approved by the National Congress in November 1995. The text presents the main factors in the transformation from a bureaucratic public administration to a managerial public administration, focusing specifically on the change from a government organization to a Social Organization. To reach the proposed objective, a case study was conducted of the Brazilian Association for Synchrotron Light Technology (ABTLuS), the first Social Organization installed in Brazil, responsible for the management of the National Synchrotron Light Laboratory (LNLS) under a management contract signed with the National Research Council (CNPq) and the Ministry of Science and Technology (MCT). Initially, the theoretical framework was developed, based on the existing literature; field research was then carried out in the cities of Campinas (SP), Brasília (DF) and Rio de Janeiro (RJ). As a result of this work, it was possible to observe that the implementation of the Social Organization administrative model brought more administrative flexibility to the qualified institution, leading to gains in agility and efficiency, with more responsibility for the leaders as well as for the employees of ABTLuS. As for the other two important items in the Director Plan, related to cultural change (from bureaucratic to managerial) and to social control (greater interaction in the State-society relationship), it is important to stress the need for a longer evaluation period, considering that the LNLS presents peculiar characteristics (a subject addressed in the work). The Social Organization ABTLuS has a little more than two years of management contract, so the process is still in course.