968 results for HPC in the Cloud



Abstract:

Through the use of the Cloud Foundry "stack" concept, a new form of isolation is provided to applications running on the PaaS, together with a new deployment feature that scales easily across distributed systems on both public and private clouds.
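
As a minimal sketch of where the stack surfaces in practice, the snippet below reads an application's lifecycle data, which names the stack the app runs on, from the Cloud Foundry v3 API. The API URL, app GUID, and token are placeholder assumptions, not values from the text, and the response layout should be verified against the CF API documentation.

```python
# Sketch: look up the Cloud Foundry "stack" assigned to an app via the v3 API.
# API_URL, APP_GUID and TOKEN are hypothetical placeholders; a real token can
# be obtained with `cf oauth-token` (its output already includes "bearer ").
import requests

API_URL = "https://api.example.com"                 # assumed CF API endpoint
APP_GUID = "00000000-0000-0000-0000-000000000000"   # assumed app GUID
TOKEN = "bearer paste-oauth-token-here"

resp = requests.get(
    f"{API_URL}/v3/apps/{APP_GUID}",
    headers={"Authorization": TOKEN},
)
resp.raise_for_status()
app = resp.json()

# For buildpack-based apps, lifecycle.data includes the stack name.
lifecycle = app.get("lifecycle", {})
print("lifecycle type:", lifecycle.get("type"))
print("stack:", lifecycle.get("data", {}).get("stack"))
```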


Abstract:

In particle physics, data analysis requires access to a large amount of computing power and storage. The LHC Computing Grid is a computing infrastructure on a global scale and, at the same time, a set of services developed by a large community of physicists and computer scientists, distributed across computing centres all over the world. This infrastructure has proven its value in the analysis of the data collected during Run 1 of the LHC, playing a fundamental role in the discovery of the Higgs boson. Today, Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by numerous scientific communities. Given the technical requirements of Run 2 (and subsequent runs) of the LHC, the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach to, or even a valid alternative to, existing technological solutions. The purpose of this thesis is to test a Cloud infrastructure and compare its performance with that of the LHC Computing Grid. Chapter 1 contains a general overview of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments that operate at it, with particular attention to the CMS experiment. Chapter 3 covers computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my work on the comparative analysis of Grid and Cloud performance.


Abstract:

Cloud computing is a new development based on the premise that data and applications are stored centrally and can be accessed through the Internet. This article sets up a broad analysis of how the emergence of clouds relates to European competition law, network regulation and electronic commerce regulation, which we relate to challenges for the further development of cloud services in Europe: interoperability and data portability between clouds; issues relating to vertical integration between clouds and Internet Service Providers; and potential problems for clouds operating in the European Internal Market. We find that these issues are not adequately addressed across the legal frameworks that we analyse, and argue for further research into how to better facilitate innovative convergent services such as cloud computing through European policy, especially in light of the ambitious digital agenda that the European Commission has set out.


Abstract:

The use of cloud computing is extending to all kinds of systems, including those that form part of Critical Infrastructures, and measuring reliability is becoming more difficult. Computing is becoming the 5th utility, partly thanks to the use of cloud services. Cloud computing is now used by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud computing by critical infrastructure systems and the reliability and continuity-of-service risks associated with that use. Examples of its use by different critical industries are presented; although the use of cloud computing by such systems is not yet widespread, this paper identifies a future risk. The concepts of macro- and micro-dependability, and the model we introduce, are useful for defining inter-dependencies and for analyzing the resilience of systems that depend on other systems, specifically in the cloud model.
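
One simple way to make the macro/micro distinction concrete is to treat end-to-end availability as the product of the availabilities along a dependency chain. The sketch below is a minimal illustration of that idea, not the paper's actual model; the component names and availability figures are invented for the example.

```python
# Minimal sketch: availability of a system that depends on external cloud services.
# Micro-dependability: components under the organization's own control.
# Macro-dependability: external services the system depends on (e.g. a public cloud).
# All names and numbers below are illustrative assumptions, not data from the paper.

def chain_availability(availabilities):
    """Availability of a series dependency chain: every link must be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

internal = {"app_server": 0.999, "database": 0.9995}       # micro level
external = {"cloud_storage": 0.995, "cloud_auth": 0.998}   # macro level

micro = chain_availability(internal.values())
macro = chain_availability(external.values())
total = micro * macro

print(f"micro-dependability (internal): {micro:.4f}")
print(f"macro-dependability (external): {macro:.4f}")
print(f"end-to-end availability:        {total:.4f}")
# Moving a component into the cloud shifts it from the micro to the macro term,
# which makes the hidden inter-dependency on the provider explicit.
```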


Abstract:

Computing is becoming the fifth utility (alongside gas, water, electricity and telephony), partly because of the impact of Cloud Computing on most organizations. This kind of computing is being used by ever more types of systems, including Critical Systems. This has an impact on the internal complexity and reliability of the organization's systems, both its own and those it offers to clients. This work investigates the use of Cloud Computing by critical systems, focusing on their dependencies and especially on their reliability. Some examples of its use are presented and, although its adoption by critical systems is not yet widespread, its potential impact is shown. The goal of this work is, first, to define a model that can represent quantitatively the reliability inter-dependencies of organizations that use these systems, and then to apply this model to a critical system in the healthcare domain and present the results. The concepts of "macro-dependability" and "micro-dependability" are introduced in the model to define inter-dependency and to analyze the reliability of systems that depend on other systems.

ABSTRACT With the increasing utilization of Internet services and cloud computing by most organizations (both private and public), it is clear that computing is becoming the 5th utility (along with water, electricity, telephony and gas). These technologies are used for almost all types of systems, and increasingly so, including Critical Infrastructure systems. Even if Critical Infrastructure systems appear not to rely directly on cloud services, there may be hidden inter-dependencies. This is true even for private cloud computing, which seems more secure and reliable. Critical systems may in some cases have begun with a clear and simple design, but evolved, as described by Egan, into "rafted" networks. Because they are usually controlled by one or a few organizations, their dependencies can be understood even when they are complex systems. The organization oversees and manages changes. These CI systems have been affected by the introduction of new ICT models such as global communications, PCs and the Internet. Virtualization took longer to be adopted by critical systems because of their strategic nature, but once such technologies had been proven in other areas they were eventually adopted as well, for reasons such as cost. A new technology model, called cloud computing, is now emerging, built on earlier technologies (virtualization, distributed and utility computing, web and software services) that are offered in new ways. Organizations are migrating more services to the cloud; this will have an impact on their internal complexity and on the reliability of the systems they offer to the organization itself and to their clients. This added complexity, and the associated risks to reliability, are not always recognized. Moreover, when two or more CI systems interact, the risks of one can affect the rest, so the risks are shared. This work investigates the use of cloud computing by critical systems and focuses on the dependencies and reliability of these systems. Some examples are presented together with the associated risks. A framework is introduced for analysing the dependability and resilience of a system that relies on cloud services, and for improving them. As part of the framework, the concepts of micro- and macro-dependability are introduced to explain internal and external dependability on services supplied by an external cloud. A pharmacovigilance model system has been used to validate the framework.


Abstract:

At head of title: 87th Cong., 2d sess. Committee print.


Abstract:

Translation of: Meghadūta.


Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015


Abstract:

Cloud storage has rapidly become a cornerstone of many businesses and has moved from an early-adopter stage to an early majority, where we typically see explosive deployments. As companies rush to join the cloud revolution, it has become vital to create the tools needed to effectively protect users' data from unauthorized access. Nevertheless, sharing data between multiple users under the same domain in a secure and efficient way is not trivial. In this paper, we propose Sharing in the Rain, a protocol that allows cloud users to securely share their data based on predefined policies. The proposed protocol is based on Attribute-Based Encryption (ABE) and allows users to encrypt data based on certain policies and attributes. Moreover, we use a Key-Policy Attribute-Based technique through which access revocation is optimized. More precisely, we show how to securely and efficiently revoke access to a file for a user who is misbehaving or is no longer part of a user group, without having to decrypt and re-encrypt the original data with a new key or a new policy.
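
As a concrete sketch of the KP-ABE building block the protocol relies on, the example below follows the documented usage of the charm-crypto library's Lewko-Sahai-Waters KP-ABE scheme: the user's key embeds a policy, the ciphertext carries attributes, and decryption succeeds only when the attributes satisfy the policy. This is an illustration of plain KP-ABE under the assumption that charm-crypto is installed; it is not the Sharing in the Rain protocol itself and does not show the paper's optimized revocation.

```python
# Sketch of Key-Policy ABE using the charm-crypto library (assumed installed).
# The policy lives in the user's key; the ciphertext is labeled with attributes.
from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_lsw08 import KPabe  # Lewko-Sahai-Waters KP-ABE

group = PairingGroup('MNT224')
kpabe = KPabe(group)

(master_public_key, master_key) = kpabe.setup()

# The authority issues a key whose embedded policy governs what it can decrypt.
policy = '(ONE or THREE) and (THREE or TWO)'
secret_key = kpabe.keygen(master_public_key, master_key, policy)

# Data is encrypted under a set of attributes, not for a specific user.
attributes = ['ONE', 'TWO', 'THREE']
msg = group.random(GT)  # in practice this would wrap a symmetric session key
cipher_text = kpabe.encrypt(master_public_key, msg, attributes)

# Decryption works because the ciphertext's attributes satisfy the key's policy.
decrypted_msg = kpabe.decrypt(cipher_text, secret_key)
assert msg == decrypted_msg
```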


Abstract:

Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds. In this approach, maximum and minimum limits drive resource allocation and deallocation actions, leading to the following questions: How can cloud users set threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on elasticity? This article tries to answer these questions for iterative high-performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. We present an analysis of best practices and possible optimizations for pairing elasticity with HPC. Considering the results, we observed that the maximum threshold influences application time more than the minimum one. We conclude that threshold values close to 100% CPU load are directly related to weaker reactivity, postponing resource reconfiguration in cases where earlier activation would be pertinent for reducing the application runtime.
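
As a minimal sketch of the reactive mechanism the article studies, the loop below scales a worker pool up or down when a CPU-load reading crosses fixed thresholds. It is a schematic illustration, not AutoElastic itself; the threshold values, the monitoring function, and the scaling actions are placeholder assumptions.

```python
# Schematic reactive elasticity controller driven by CPU-load thresholds.
# Not AutoElastic itself: get_average_cpu_load(), add_worker() and
# remove_worker() are hypothetical hooks standing in for a real cloud API.
import random
import time

MAX_THRESHOLD = 0.90   # scale out above this average CPU load
MIN_THRESHOLD = 0.40   # scale in below this average CPU load

workers = 2            # current number of allocated VMs/workers

def get_average_cpu_load():
    """Placeholder monitor; a real deployment would query the cloud platform."""
    return random.uniform(0.2, 1.0)

def add_worker():
    global workers
    workers += 1

def remove_worker():
    global workers
    workers = max(1, workers - 1)

for step in range(10):           # in practice this loop runs for the app's lifetime
    load = get_average_cpu_load()
    if load > MAX_THRESHOLD:
        add_worker()             # reactive scale-out
    elif load < MIN_THRESHOLD and workers > 1:
        remove_worker()          # reactive scale-in
    print(f"step={step} load={load:.2f} workers={workers}")
    time.sleep(0.1)              # monitoring interval (shortened for the sketch)
```

With MAX_THRESHOLD pushed toward 1.0, the scale-out branch fires later and less often, which mirrors the weaker reactivity the article reports for thresholds close to 100% CPU load.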


Abstract:

This thesis presents an analysis of the largest catalog to date of infrared spectra of massive young stellar objects in the Large Magellanic Cloud. Evidenced by their very different spectral features, the luminous objects span a range of evolutionary states, from those most embedded in their natal molecular material to those that have dissipated and ionized their surroundings to form compact HII regions and photodissociation regions. We quantify the contributions of the various spectral features using the statistical method of principal component analysis. Using this analysis, we classify the YSO spectra into several distinct groups based on their dominant spectral features: silicate absorption (S group), silicate absorption and fine-structure line emission (SE), polycyclic aromatic hydrocarbon (PAH) emission (P group), PAH and fine-structure line emission (PE), and only fine-structure line emission (E). Based on the relative numbers of sources in each category, we estimate the amount of time massive YSOs spend in each evolutionary stage. We find that approximately 50% of the sources have ionic fine-structure lines, indicating that a compact HII region forms about halfway through the YSO lifetime probed in our study. Of the 277 YSOs we collected spectra for, 41 have ice absorption features, indicating that they are surrounded by cold, ice-bearing dust particles. We have decomposed the shape of the ice features to probe the composition and thermal history of the ice. We find that most of the CO2 ice is embedded in a polar ice matrix that has been thermally processed by the embedded YSO; the amount of thermal processing may be correlated with the luminosity of the YSO. Using the Australia Telescope Compact Array, we imaged the dense gas around a subsample of our sources in the HII complexes N44, N105, N113, and N159, using HCO+ and HCN as dense-gas tracers. We find that the molecular material in star-forming environments is highly clumpy, with clumps that range from subparsec to ~2 parsecs in size and have masses between 10^2 and 10^4 solar masses. We find varying levels of star formation in the clumps, with the lower-mass clumps tending to lack massive YSOs. These YSO-less clumps could either represent an earlier stage than the more massive YSO-bearing ones or be clumps that will never form a massive star. Clumps with massive YSOs at their centers have masses larger than those with massive YSOs at their edges, and we suggest that the difference is evolutionary: clumps with YSOs at their edges are more advanced than those with YSOs at their centers, and may have had a significant fraction of their mass disrupted or destroyed by the forming massive star. We find that the strength of the silicate absorption feature seen in the YSO IR spectra is well correlated with the on-source HCO+ and HCN flux densities, such that the strength of the feature is indicative of how embedded the YSO is. We estimate that ~40% of the entire spectral sample has strong silicate absorption features, implying that YSOs are embedded in circumstellar material for about 40% of the time probed in our study.
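
The classification step rests on principal component analysis of the spectra. The sketch below shows the generic shape of such an analysis with scikit-learn; the array sizes, the synthetic data, and the normalization choice are placeholder assumptions, not the catalog's actual data or pipeline.

```python
# Sketch: PCA-based grouping of IR spectra, in the spirit of the analysis
# described above. The data here are synthetic placeholders; the real catalog
# has 277 YSO spectra with features such as silicate absorption and PAH emission.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_spectra, n_wavelengths = 277, 350      # assumed shape, for illustration only
spectra = rng.normal(size=(n_spectra, n_wavelengths))  # stand-in for real fluxes

# Normalize each spectrum so PCA responds to spectral shape, not brightness.
spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)

pca = PCA(n_components=5)
scores = pca.fit_transform(spectra)      # each spectrum -> 5 component weights

print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
# In an analysis like the one above, spectra would then be grouped (S, SE, P,
# PE, E) according to which components, i.e. which spectral features, dominate.
print("first spectrum's scores:", np.round(scores[0], 3))
```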