193 results for HPC


Relevance:

20.00%

Publisher:

Abstract:

The evolution and maturation of Cloud Computing created an opportunity for the emergence of new Cloud applications. High-Performance Computing, a class of complex problem solving, arises as a new business consumer by taking advantage of Cloud offerings and leaving behind expensive datacenter management and difficult grid development. Now in an advanced maturation phase, today's Cloud has shed many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to massification, and customizable on-demand services have drawn marked attention from other markets. HPC, despite being a very well established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large computing grids. The problem with this common placement is mainly the initial cost and the inability to fully use the resources, which not all research labs can afford. The main objective of this work was to investigate new technical solutions to allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources – the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture, and the migration of several applications. The final application integrates a simplified incorporation of both public and private Cloud resources, as well as HPC application scheduling, deployment, and management. It uses a well-defined user-role strategy based on federated authentication and a seamless procedure for daily usage, balancing low cost and performance.

Relevance:

20.00%

Publisher:

Abstract:

Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds. In this approach, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following problem statements: How can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. Here, we present an analysis of best practices and possible optimizations regarding the elasticity and HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration when activating it in advance could be pertinent for reducing the application runtime.
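
A minimal Python sketch of the threshold-driven reactive decision described above; the function name, threshold values, and actions are illustrative assumptions and not the AutoElastic implementation.

def elasticity_action(cpu_load, upper=0.90, lower=0.40):
    """Decide a scaling action from the observed CPU load (a fraction in [0, 1]).

    upper -- maximum threshold: exceeding it triggers resource allocation
    lower -- minimum threshold: falling below it triggers deallocation
    """
    if cpu_load >= upper:
        return "allocate"      # add a node/VM to the application
    if cpu_load <= lower:
        return "deallocate"    # release an idle node/VM
    return "keep"              # stay with the current configuration

# A maximum threshold close to 100% reacts late (weaker reactivity):
print(elasticity_action(0.95))              # -> 'allocate'
print(elasticity_action(0.95, upper=0.99))  # -> 'keep' (reconfiguration postponed)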

Relevance:

20.00%

Publisher:

Abstract:

The Weather Research and Forecasting (WRF) model is a fully functional modeling system for atmospheric research and weather prediction. WRF was developed with an emphasis on efficiency, portability, maintainability, scalability, and productivity, which has allowed it to be successfully deployed on a wide variety of HPC systems. For this reason, the size of the problems supported by WRF has increased, so understanding WRF's dependence on the various cluster elements, such as CPUs, interconnects, and libraries, is crucial to enable efficient, high-throughput predictions. In this context, this manuscript studies the scalability of WRF on an HPC system, taking three parameters into consideration: number of CPUs and nodes, communications, and libraries. To this end, two benchmarks are carried out on a high-performance cluster equipped with a Gigabit Ethernet network; they make it possible to establish the relationship between scalability and the three parameters studied, and in particular demonstrate the sensitivity of WRF to inter-node communication. This factor is essential for maintaining scalability and increasing throughput when adding nodes to the cluster.
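
As a complement, a short Python sketch of the scalability metrics such benchmarks typically report (speedup and parallel efficiency); the timing values are illustrative placeholders, not measurements from this study.

def speedup(t_ref, t_n):
    return t_ref / t_n

def efficiency(t_ref, t_n, n_nodes):
    return speedup(t_ref, t_n) / n_nodes

# Hypothetical wall-clock times (seconds) for a WRF benchmark on 1, 2, 4, and 8 nodes.
timings = {1: 3600.0, 2: 1900.0, 4: 1050.0, 8: 640.0}
t1 = timings[1]
for nodes, t in sorted(timings.items()):
    print(f"{nodes} nodes: speedup = {speedup(t1, t):.2f}, "
          f"efficiency = {efficiency(t1, t, nodes):.0%}")
# Falling efficiency at higher node counts reflects the growing cost of
# inter-node communication over a Gigabit Ethernet interconnect.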

Relevance:

20.00%

Publisher:

Abstract:

After a decade evolving in the High Performance Computing arena, GPU-equipped supercomputers have conquered the Top500 and Green500 lists, providing unprecedented levels of computational power and memory bandwidth. This year, major vendors have introduced new accelerators based on 3D memory, such as Intel's Xeon Phi Knights Landing and Nvidia's Pascal architecture. This paper reviews the hardware features of these new HPC accelerators and unveils their potential performance for scientific applications, with an emphasis on the Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM) used by commercial products according to the roadmaps already announced.
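
A minimal roofline-style sketch of why 3D-memory bandwidth matters for scientific kernels: attainable performance is bounded by min(peak FLOP/s, bandwidth × arithmetic intensity). The peak and bandwidth figures below are rough illustrative assumptions, not numbers taken from the paper.

def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flop_per_byte):
    # Roofline bound: compute-limited or memory-bandwidth-limited, whichever is lower.
    return min(peak_gflops, bandwidth_gbs * intensity_flop_per_byte)

# A memory-bound kernel (e.g., a stencil at ~0.5 FLOP/byte) on two assumed devices.
devices = {
    "DDR-class memory": {"peak_gflops": 3000.0, "bandwidth_gbs": 100.0},
    "HBM/HMC-class 3D memory": {"peak_gflops": 5000.0, "bandwidth_gbs": 700.0},
}
for name, dev in devices.items():
    perf = attainable_gflops(dev["peak_gflops"], dev["bandwidth_gbs"], 0.5)
    print(f"{name}: ~{perf:.0f} GFLOP/s attainable at 0.5 FLOP/byte")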

Relevance:

20.00%

Publisher:

Abstract:

A High-Performance Computing job dispatcher is a critical piece of software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time they require cannot exceed a certain threshold, so as not to affect normal system functioning. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and the duration of jobs. Heuristic-based techniques have been broadly used in HPC systems, obtaining solutions in a short time at the cost of (sub-)optimality. However, their scheduling and resource allocation components are separated, which generates decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, which generally employ many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to tackle job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions in a brief period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications: they generate on-line dispatching decisions within an appropriate time and make effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
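
A minimal sketch of a single on-line dispatching step that couples the scheduling choice with the resource check and uses predicted job durations; this is a plain greedy heuristic for illustration only, not the CP-based dispatchers proposed here, and all names are assumptions.

from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    requested_cores: int
    predicted_duration: float  # seconds, from a duration-prediction model

def dispatch(queue, free_cores):
    """Return the jobs to start now, favouring short jobs that fit in the free cores."""
    started = []
    # Prioritise the shortest predicted durations (helps QoS for short jobs).
    for job in sorted(queue, key=lambda j: j.predicted_duration):
        if job.requested_cores <= free_cores:
            started.append(job)
            free_cores -= job.requested_cores
    return started

queue = [Job("simulation", 64, 7200.0), Job("analytics-1", 8, 60.0), Job("analytics-2", 16, 120.0)]
print([j.job_id for j in dispatch(queue, free_cores=32)])  # -> ['analytics-1', 'analytics-2']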

Relevance:

20.00%

Publisher:

Abstract:

LHC experiments produce an enormous amount of data, estimated to be of the order of a few petabytes per year. Data management takes place using the Worldwide LHC Computing Grid (WLCG) infrastructure, both for storage and for processing operations. However, in recent years, many more resources have become available on High Performance Computing (HPC) farms, which generally have many computing nodes with a high number of processors. Large collaborations are working to use these resources in the most efficient way, compatibly with the constraints imposed by their computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to process a typical data analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework will be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen in 2023.

Relevance:

20.00%

Publisher:

Abstract:

Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity, due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insight into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as the Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool like the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a complex linear system, an optimized and scalable linear-system solver can significantly decrease the time spent computing its solution. One of the most widely used algorithms for solving large systems is Gaussian elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. However, another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian elimination from ScaLAPACK, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
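
A minimal sketch of package-level energy measurement through the Linux powercap interface to Intel RAPL (the thesis pairs RAPL with PAPI; this sketch reads the sysfs counter directly). It assumes a Linux node with the intel_rapl driver and read permission on the counter, and the measured workload is only a placeholder for the actual solver runs.

import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 counter, in microjoules

def read_energy_uj():
    with open(RAPL_ENERGY) as f:
        return int(f.read())

def measure(workload):
    """Return (energy in joules, elapsed seconds) for one run of `workload`."""
    e0, t0 = read_energy_uj(), time.time()
    workload()
    e1, t1 = read_energy_uj(), time.time()
    # A robust tool would also handle the counter wrapping at max_energy_range_uj.
    return (e1 - e0) / 1e6, t1 - t0

def placeholder_solver():
    # Stand-in for a ScaLAPACK or Inhibition Method run launched under MPI.
    sum(i * i for i in range(10_000_000))

energy_j, elapsed_s = measure(placeholder_solver)
print(f"energy: {energy_j:.2f} J, average power: {energy_j / elapsed_s:.2f} W")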

Relevance:

10.00%

Publisher:

Abstract:

Retinal pigment epithelium (RPE) cells, along with tight junction (TJ) proteins, constitute the outer blood-retinal barrier (BRB). Contradictory findings suggest a role for the outer BRB in the pathogenesis of diabetic retinopathy (DR). The aim of this study was to investigate whether the mechanisms involved in these alterations are sensitive to nitrosative stress, and whether cocoa or epicatechin (EC) protects against this damage under diabetic (DM) milieu conditions. Cells of a human RPE line (ARPE-19) were exposed to high-glucose (HG) conditions for 24 hours in the presence or absence of cocoa powder containing 0.5% or 60.5% polyphenol (low-polyphenol cocoa [LPC] and high-polyphenol cocoa [HPC], respectively). Exposure to HG decreased claudin-1 and occludin TJ expression and increased extracellular matrix (ECM) accumulation, whereas levels of TNF-α and inducible nitric oxide synthase (iNOS) were upregulated, accompanied by increased nitric oxide levels. This nitrosative stress resulted in S-nitrosylation of caveolin-1 (CAV-1), which in turn increased CAV-1 traffic and its interactions with claudin-1 and occludin. This cascade was inhibited by treatment with HPC or EC through δ-opioid receptor (DOR) binding and stimulation, thereby decreasing TNF-α-induced iNOS upregulation and CAV-1 endocytosis. The TJ functions were restored, leading to prevention of paracellular permeability, restoration of the resistance of the ARPE-19 monolayer, and decreased ECM accumulation. The detrimental effects on TJs in ARPE-19 cells exposed to the DM milieu occur through a CAV-1 S-nitrosylation-dependent endocytosis mechanism. High-polyphenol cocoa or EC exerts protective effects through DOR stimulation.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: to evaluate and measure the midpalatal suture on digitized total maxillary occlusal radiographs, before and after its disjunction. METHODS: the sample consisted of 17 patients aged between 7 and 22 years. Total maxillary occlusal radiographs were taken before and after the opening of the midpalatal suture and digitized with an HP Scanjet 6110 C scanner fitted with an HPC 6261 6100 C transparency adapter, using the Deskscan II program. For evaluation and measurement, the Radioimp® program (Radiomemory, MG/Brazil) was used. The statistical analysis employed the mean, standard deviation, coefficient of variation, the t test, and ANOVA. CONCLUSIONS: from the results it was possible to conclude that (1) in the incisor region there was a statistically significant midpalatal opening; (2) a diastema opened between the maxillary central incisors in about 69.37% of the cases; (3) the opening of the midpalatal suture was larger in the region 10 mm posterior to the crest than in the region 3 mm posterior to the expansion screw; (4) in the region 3 mm posterior to the expansion screw the opening was 35.97%, and in the region 10 mm posterior to the crest it was 69.37%.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this paper was to assess the bacteriological quality of drinking water in a peri-urban area located in the Metropolitan Region of São Paulo, Brazil. A total of 89 water samples from community plastic tanks and 177 water samples from wells were collected bimonthly, from September 2007 to November 2008, for the evaluation of bacteriological parameters including Escherichia coli, Enterococcus, and heterotrophic plate count (HPC). Clostridium perfringens was investigated in a subsample (40 samples from community plastic tanks and 40 from wells). E. coli was present in 5 (5.6%) samples from community plastic tanks (2.0 - 5.1x10(4) MPN/100mL) and in 70 (39.5%) well samples (2.0 - 8.6x10(4) MPN/100mL); thus, these samples were not in accordance with the Brazilian regulation. Enterococcus was detected in 20 (22.5%) community plastic tank samples (1 to 79 NC/100mL) and in 142 (80.2%) well samples (1 to >200 NC/100mL). C. perfringens was detected in 5 (12.5%) community plastic tank samples and in 35 (87.5%) well samples (2.2 to >16 MPN/100mL). HPC was above 500 CFU/mL in 5 (5.6%) samples from community plastic tanks; in well samples, HPC ranged from <1 to 1.6x10(4) CFU/mL. The residual chlorine did not meet the standard established in the drinking water legislation (0.2 mg/L), except in 20 (22.5%) samples. These results confirm the vulnerability of the water supply systems in this peri-urban area, which is clearly a public health concern.

Relevance:

10.00%

Publisher:

Abstract:

Sixty samples of tissue fragments with lesions suggestive of tuberculosis, collected from bovine abattoirs and kept in a saturated solution of sodium borate, were subjected to four treatments: decontamination with 4% NaOH (Petroff method), 12% H2SO4, or 1.5% HPC (1-hexadecylpyridinium chloride), and physiological saline solution (control). The HPC method showed the lowest contamination rate (3%) when compared to the control (88%, p<0.001), NaOH (33%, p<0.001), and H2SO4 (21.7%, p<0.002) methods. Regarding isolation success, the HPC method performed better (40%) than the control (3%, p<0.001), NaOH (13%, p=0.001), and H2SO4 (1.7%, p<0.001) methods. These results indicate that HPC is an alternative to the Petroff method.

Relevance:

10.00%

Publisher:

Abstract:

One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout, and handling, using an associated “Floor-Plan” (metadata); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the metadata, which tag the data blocks, can be propagated and applied consistently. This is possible at the disk level, in distributing the computations across parallel processors, in “imagelet” composition, and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
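
A minimal Python sketch of the block-based pipeline idea described above, where metadata tags travel with each data block through a rendering stage; the block layout and field names are illustrative assumptions, not the SDSC “Floor-Plan” format.

import numpy as np

def make_blocks(volume, block_size):
    """Split a 3D volume into blocks sized to fit available resources, each tagged with metadata."""
    for z in range(0, volume.shape[0], block_size):
        block = volume[z:z + block_size]
        meta = {"z_offset": z, "shape": block.shape}  # metadata that travels with the block
        yield meta, block

def render_block(meta, block):
    """Stand-in for a volume-rendering stage; metadata is propagated, not recomputed."""
    imagelet = block.max(axis=0)                      # trivial maximum-intensity projection
    return {**meta, "stage": "rendered"}, imagelet

volume = np.random.rand(64, 32, 32)
for meta, block in make_blocks(volume, block_size=16):
    meta, imagelet = render_block(meta, block)
    print(meta["z_offset"], meta["stage"], imagelet.shape)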

Relevance:

10.00%

Publisher:

Abstract:

The high-affinity receptor for human granulocyte-macrophage colony-stimulating factor (GM-CSF) consists of a cytokine-specific alpha-subunit (hGMRα) and a common signal-transducing beta-subunit (hβc) that is shared with the interleukin-3 and -5 receptors. We have previously identified a constitutively active extracellular point mutant of hβc, I374N, that can confer factor independence on murine FDC-P1 cells but not on BAF-B03 or CTLL-2 cells (Jenkins, B. J., D'Andrea, R. J., and Gonda, T. J. (1995) EMBO J. 14, 4276-4287). This restricted activity suggested the involvement of cell type-specific signaling molecules in the activation of this mutant. We report here that one such molecule is the mouse GMRα (mGMRα) subunit, since introduction of mGMRα, but not hGMRα, into BAF-B03 or CTLL-2 cells expressing the I374N mutant conferred factor independence. Experiments utilizing mouse/human chimeric GMRα subunits indicated that the species specificity lies in the extracellular domain of GMRα. Importantly, the requirement for mGMRα correlated with the ability of I374N (but not wild-type hβc) to constitutively associate with mGMRα. Expression of I374N in human factor-dependent UT7 cells also led to factor-independent proliferation, with concomitant up-regulation of hGMRα surface expression. Taken together, these findings suggest a critical role for association with GMRα in the constitutive activity of I374N.

Relevance:

10.00%

Publisher:

Abstract:

Several activating mutations have recently been described in the common beta subunit of the human interleukin (IL)-3, IL-5, and granulocyte-macrophage colony-stimulating factor (GM-CSF) receptors (hβc). Two of these, FIΔ and I374N, result, respectively, in a 37-amino-acid duplication and an isoleucine-to-asparagine substitution in the extracellular domain. A third, V449E, leads to a valine-to-glutamic acid substitution in the transmembrane domain. Previous studies have shown that, when expressed in murine hemopoietic cells in vitro, the extracellular mutants can confer factor independence only on the granulocyte-macrophage lineage, while the transmembrane mutant can do so on all cell types of the myeloid and erythroid compartments. To further study the signaling properties of the constitutively active hβc mutants, we have used novel murine hemopoietic cell lines, which we describe in this report. These lines, FDB1 and FDB2, proliferate in murine IL-3 and undergo granulocyte-macrophage differentiation in response to murine GM-CSF. We find that while the transmembrane mutant, V449E, confers factor-independent proliferation on these cell lines, the extracellular hβc mutants promote differentiation. Hence, in addition to their ability to confer factor independence on distinct cell types, transmembrane and extracellular activated hβc mutants deliver distinct signals to the same cell type. Thus, the FDB cell lines, in combination with activated hβc mutants, constitute a powerful new system to distinguish between signals that determine hemopoietic proliferation or differentiation. (C) 2000 by The American Society of Hematology.

Relevance:

10.00%

Publisher:

Abstract:

To date, several activating mutations have been discovered in the common signal-transducing subunit (hβc) of the receptors for human granulocyte-macrophage colony-stimulating factor, interleukin-3, and interleukin-5. Two of these, FIΔ and I374N, result in a 37-amino-acid duplication and a single amino acid substitution in the extracellular domain of hβc, respectively. A third, V449E, results in a single amino acid substitution in the transmembrane domain. Previous studies comparing the activity of these mutants in different hematopoietic cell lines imply that the transmembrane and extracellular mutations act by different mechanisms and suggest the requirement for cell type-specific molecules in signalling. To characterize the ability of these mutant hβc subunits to mediate growth and differentiation of primary cells, and hence investigate their oncogenic potential, we have expressed all three mutants in primary murine hematopoietic cells using retroviral transduction. It is shown that, whereas expression of either extracellular hβc mutant confers factor-independent proliferation and differentiation on cells of the neutrophil and monocyte lineages only, expression of the transmembrane mutant does so on these lineages as well as on the eosinophil, basophil, megakaryocyte, and erythroid lineages. Factor-independent myeloid precursors expressing the transmembrane mutant displayed extended proliferation in liquid culture and in some cases yielded immortalized cell lines. (C) 1997 by The American Society of Hematology.