987 results for public cloud


Relevance:

100.00%

Publisher:

Abstract:

The main objective of this master’s thesis is to provide a comprehensive view of cloud computing and SaaS, and to analyze how well CADM, a unit of Capgemini Finland Ltd., would fit into cloud-based SaaS business. A further objective is to investigate how well public clouds would suit CADM as a delivery model if it were to provide SaaS applications to its customers. The thesis examines the characteristics of cloud computing and SaaS, especially from the application provider’s point of view, by reviewing the research and analyses published on these two phenomena over the past few years. CADM’s current business model and operations are then analyzed from the SaaS and public cloud perspectives using SWOT analysis, a widely used tool for assessing a company’s strategic position and identifying ways to improve its operations. The analysis reveals that CADM should pursue SaaS business, as it could provide remarkable advantages and strengthen the unit’s position in its current markets. However, a pure SaaS model would not be optimal for CADM, because it has no product of its own that could be transformed into a SaaS offering and it lacks infrastructure management capability. A public cloud would likewise not be the most suitable delivery model for providing SaaS services. The main conclusion of this thesis is that CADM should adopt the SaaS model via the Capgemini Immediate offering.

Relevance:

100.00%

Publisher:

Abstract:

Through the use of the Cloud Foundry "stack" concept, a new form of isolation is provided to applications running on the PaaS, together with a new deployment feature that can easily scale on distributed systems, in both public and private clouds.

Relevance:

70.00%

Publisher:

Abstract:

The cloud computing paradigm has recently received a great deal of attention from both industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, a company may wish to host its data and services on its own premises, or may have to comply with data protection laws. These circumstances make private cloud infrastructures desirable, whether to complement public ones or to replace them entirely. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from developing to a sufficient level, and the myriad of available options has instilled in customers the fear of technology lock-in. One cause of this problem is the misalignment between academic research and industry offerings: the former focuses on idealized scenarios dissimilar from real-world situations, while the latter develops solutions without considering how they fit with common standards, or without disseminating its results at all. To address this problem, I propose a modular management system for private cloud infrastructures that focuses on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. The model splits the environment into two views, which separate the concerns of the stakeholders while still enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs, and Instance Reservations), so that the management system can take advantage of each type’s specific features. The information model is paired with a set of atomic, reversible, and independent management actions, which determine the operations that can be performed on the environment and are used to realize the cloud environment’s scalability. I also describe a management engine that, starting from the environment’s state and using the aforementioned set of actions, is in charge of resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications, and the Infrastructure Manager layer, responsible for the actual physical resources. The management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) with an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is coupled with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design, capable of interfacing with several technologies, and offers several access interfaces.
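To make the online phase concrete, here is a minimal Python sketch of a first-fit placement heuristic in the spirit of the engine described above; the host names, capacities, and the first-fit policy itself are illustrative assumptions, not details taken from the thesis.

```python
# Illustrative sketch of an online placement heuristic; all names and
# capacities below are hypothetical, not values from the thesis.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu: int                                 # free CPU cores
    ram: int                                 # free RAM in GB
    vms: list = field(default_factory=list)  # VMs placed on this host

def place_online(hosts, vm_name, cpu, ram):
    """Online phase, first fit: put the VM on the first host with room."""
    for host in hosts:
        if host.cpu >= cpu and host.ram >= ram:
            host.cpu -= cpu
            host.ram -= ram
            host.vms.append(vm_name)
            return host.name
    return None  # no capacity left: the consolidation phase (ILP) would rebalance

hosts = [Host("node1", cpu=16, ram=64), Host("node2", cpu=8, ram=32)]
print(place_online(hosts, "web-frontend", cpu=4, ram=8))  # -> node1
```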

Relevance:

60.00%

Publisher:

Abstract:

Now that digital publications, open access, and institutional and subject repositories are no longer questioned, it is time to start working on preservation. We are responsible for ensuring future access to and use of digital documents, whether they were born digital or are the product of digitization projects. This article discusses digital preservation and content types, as well as the concepts, typologies, costs, and uses of the so-called "cloud". It also presents an example of a private cloud, the preservation of doctoral theses in TDX, and discusses an experience with a public cloud, DuraCloud.

Relevance:

60.00%

Publisher:

Abstract:

The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row or column oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular solutions available (which are not always the best for such ad hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis techniques workshops.
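As an illustration of the hypercube idea, here is a minimal map/reduce-style sketch in plain Python (no Hadoop required): the mapper assigns each star record to a cell of a three-dimensional histogram, and the reducer sums the counts per cell. The star records and bin sizes are invented for the example and are not Gaia data.

```python
# Map/reduce-style sketch of multidimensional histogram (hypercube) generation;
# star records and bin sizes are illustrative assumptions, not Gaia data.
from collections import Counter
from itertools import chain
from math import floor

stars = [
    {"ra": 10.2, "dec": -5.1, "mag": 14.3},
    {"ra": 10.9, "dec": -5.6, "mag": 17.8},
    {"ra": 250.4, "dec": 36.0, "mag": 14.9},
]

def mapper(star):
    # Key each star by its hypercube cell: 1-degree sky bins x 1-magnitude bins.
    yield (floor(star["ra"]), floor(star["dec"]), floor(star["mag"])), 1

def reducer(pairs):
    # Sum the per-cell counts, as combiners/reducers would do cluster-wide.
    cube = Counter()
    for cell, count in pairs:
        cube[cell] += count
    return cube

print(reducer(chain.from_iterable(mapper(s) for s in stars)))
# Counter({(10, -6, 14): 1, (10, -6, 17): 1, (250, 36, 14): 1})
```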

Relevance:

60.00%

Publisher:

Abstract:

The objective of this master’s thesis was to determine whether Stora Enso Metsä would benefit from moving its information systems from traditional data-center services to cloud services. Stora Enso Metsä runs many different planning-related batch jobs. Some of them are run only a few times a year, such as the mills’ wood-demand calculation; others a few times a month, such as transport model runs; or a few times a week, such as harvesting planning. In those cases the servers can be started separately and used only when they are actually needed. As a result of the work, it was found that adopting cloud services brings cost savings and adds flexibility to service management. Implemented as self-service, servers can be managed flexibly to save costs. Cloud services can also speed up project throughput and allow downtime windows to be targeted more precisely, because vendor work is not necessarily needed at all. Ultimately, however, it is very difficult for the customer to know how costs have been allocated differently across the various services.

Relevance:

60.00%

Publisher:

Abstract:

Advances in the Internet and telecommunications have been changing the concepts of Information Technology (IT), especially with regard to outsourced services, through which organizations seek cost savings and a sharper focus on their core business. Along with the development of such outsourcing, a new model named Cloud Computing (CC) has evolved, which proposes migrating both data processing and information storage to the Internet. Key aspects of Cloud Computing include cost savings, benefits, risks, and changes to IT paradigms. Nonetheless, adopting the model creates difficulties for decision-making by IT managers, mainly regarding which solutions may go to the cloud and which service providers are most appropriate to the organization’s reality. The overall aim of this research is to apply the AHP (Analytic Hierarchy Process) method to decision-making in Cloud Computing. The methodology used was exploratory, with a case study applied to a nationwide organization (the Federation of Industries of RN). Data were collected through two structured questionnaires answered electronically by IT technicians and by the company’s Board of Directors. The data were analyzed qualitatively and comparatively, using Web-Hipre, a software tool for the AHP method. The results confirmed the value of applying the AHP method to decisions on adopting Cloud Computing: at the time the research was carried out, the company already had an interest in and a need for adopting CC, given the internal problems with infrastructure and availability of information it faced. The organization wanted to adopt CC but had doubts about the cloud model and about which service provider would best meet its real needs. The application of AHP then served as a guiding tool for choosing the best alternative, pointing to the Hybrid Cloud as the ideal starting point for Cloud Computing, structured as follows: the Infrastructure as a Service (IaaS) layer (processing and storage) should sit partly in the public cloud and partly in the private cloud; the Platform as a Service (PaaS) layer (software development and testing) showed a preference for the private cloud; and the Software as a Service (SaaS) layer was split, with e-mail going to the public cloud and applications to the private cloud. The research also identified the important factors for hiring a Cloud Computing provider.
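For readers unfamiliar with AHP, the following Python sketch shows the core computation the method relies on: deriving priority weights for alternatives from a Saaty-scale pairwise comparison matrix via the principal eigenvector, plus the standard consistency check. The matrix values and alternative names are invented for illustration, not taken from the study.

```python
# Hedged AHP sketch: priority weights from a pairwise comparison matrix.
# The comparisons below are made-up Saaty-scale judgments, not study data.
import numpy as np

# Pairwise comparisons of three alternatives: public vs. private vs. hybrid cloud.
A = np.array([
    [1.0, 1/3, 1/5],
    [3.0, 1.0, 1/2],
    [5.0, 2.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority vector

# Saaty's consistency check: CR < 0.10 is conventionally acceptable.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)     # consistency index
CR = CI / 0.58                           # random index for n = 3 is 0.58

print(dict(zip(["public", "private", "hybrid"], weights.round(3))))
print("consistency ratio:", round(CR, 3))
```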

Relevance:

60.00%

Publisher:

Abstract:

Reproducible research in scientific workflows is often addressed by tracking the provenance of the produced results. While this approach allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution, it does not ensure that the computational environment is available for subsequent executions to reproduce the experiment. In this work, we propose describing the resources involved in the execution of an experiment using a set of semantic vocabularies, so as to conserve the computational environment. We define a process for documenting the workflow application, management system, and their dependencies based on 4 domain ontologies. We then conduct an experimental evaluation using a real workflow application on an academic and a public Cloud platform. Results show that our approach can reproduce an equivalent execution environment of a predefined virtual machine image on both computing platforms.
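As a rough sketch of what such a semantic description could look like, the following Python snippet uses rdflib to record a virtual machine’s software environment as RDF triples that a deployment service could later consume to re-create an equivalent environment. The namespace URI and property names are hypothetical stand-ins, not the paper’s actual ontologies.

```python
# Minimal sketch of describing an execution environment with RDF vocabularies.
# The namespace and property names are invented placeholders.
from rdflib import Graph, Literal, Namespace, RDF

ENV = Namespace("http://example.org/workflow-env#")  # hypothetical vocabulary

g = Graph()
g.bind("env", ENV)

vm = ENV["vm-worker-01"]
g.add((vm, RDF.type, ENV.VirtualMachine))
g.add((vm, ENV.operatingSystem, Literal("Ubuntu 20.04")))
g.add((vm, ENV.installedPackage, Literal("workflow-engine 2.1")))
g.add((vm, ENV.deployedOn, Literal("academic-cloud")))

# Serialize the description so it can be archived alongside the results.
print(g.serialize(format="turtle"))
```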

Relevance:

60.00%

Publisher:

Abstract:

Increasingly, Information Technology (IT) has been used to sustain business strategies, which has raised its relevance; IT governance is therefore seen as one of organizations’ current priorities. The pursuit of strategic alignment between business and IT is debated as a factor in business success, yet despite this importance, senior business managers are usually reluctant to take responsibility for decisions involving IT, mainly because of the complexity of its infrastructure. Cloud computing, in turn, is seen as an element capable of assisting in the implementation of organizational strategies, because its characteristics enable greater efficiency and agility in IT, and it is considered a new computing paradigm. The main objective of this research is to analyze the relationship between IT governance arrangements and strategic alignment in the presence of public-cloud Infrastructure as a Service (IaaS). To that end, an exploratory, descriptive, and inferential study was developed, with a quantitative approach to the problem, using a cross-sectional descriptive survey. An electronic questionnaire was applied to ISACA chapter associates in São Paulo and the Distrito Federal, totaling 164 respondents. The instrument was based on Weill and Ross (2006) for the IT governance arrangements matrix; on Henderson and Venkatraman (1993) and Luftman (2000) for the strategic alignment maturity model; and on NIST (2011b), ITGI (2007), and CSA (2010) for the maturity of public IaaS in its essential characteristics. As for the main results, this research showed that with public IaaS the decision-making structures change, with greater participation of senior executives in all five key IT decisions (the IT governance arrangements matrix), including the more technical decisions on IT architecture and infrastructure. With the increased participation of senior executives, a decrease was also observed in the share of IT specialists, characterizing the decision process with the duopoly archetype (shared decision). With regard to strategic alignment, it was observed to change with cloud computing: organizations with public IaaS showed a statistically significantly higher maturity of strategic alignment than organizations without IaaS. The maturity of public IaaS sits at the intermediate level (level 3, "defined process"), with elasticity and measurement reaching level 4, "managed and measurable". It was also possible to infer that, in organizations with public IaaS, there are positive correlations between the key decisions and IaaS maturity, especially for IT principles, architecture, and infrastructure, and for the archetypes involving senior executives and IT specialists. Finally, maturity of strategic alignment and maturity of public IaaS correlate: the higher the strategic alignment, the greater the maturity of the public IaaS, and vice versa.

Relevance:

40.00%

Publisher:

Abstract:

With the increasing use of Internet services and cloud computing by most organizations, both private and public, computing is becoming the fifth utility (along with water, electricity, telephony, and gas). These technologies are used by an ever-growing number and variety of systems, including Critical Infrastructure (CI) systems. Even if CI systems appear not to rely directly on cloud services, there may be hidden interdependencies; this is true even for private cloud computing, which seems more secure and reliable. Critical systems may begin, in some cases, with a clear and simple design, but evolve, as described by Egan, into "rafted" networks. Because they are usually controlled by one or a few organizations, their dependencies can be understood even when they are complex systems, and the organization oversees and manages changes. These CI systems have been affected by the introduction of new ICT models such as global communications, PCs, and the Internet. Virtualization took longer to be adopted by critical systems because of their strategic nature, but once these technologies had been proven in other areas, they were eventually adopted as well, for reasons such as cost. A new technology model, cloud computing, is now emerging, built on earlier technologies (virtualization, distributed and utility computing, web and software services) offered in new ways. Organizations are migrating more services to the cloud; this will have an impact on their internal complexity and on the reliability of the systems they offer to the organization itself and to its clients. This added complexity, and the associated risks to reliability, are not always visible; moreover, when two or more CI systems interact, the risks of one can affect the rest, so the risks are shared. This work investigates the use of cloud computing by critical systems, focusing on the dependencies and reliability of these systems. Some examples of such use are presented together with the associated risks, and although adoption in critical systems is not yet widespread, the potential impact is outlined. The objective is first to define a model that can quantitatively represent the reliability interdependencies of organizations using these systems, and then to apply it to a critical system in the healthcare domain and present the results. A framework is introduced for analyzing the dependability and resilience of a system that relies on cloud services and for improving them. As part of the framework, the concepts of micro- and macro-dependability are introduced to explain the internal and external dependability of services supplied by an external cloud. A pharmacovigilance model system has been used for framework validation.
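A back-of-the-envelope sketch of how micro- and macro-dependability combine: if a system needs both its internal components and its external cloud dependencies to be up, its availability is bounded by the product of the individual availabilities. The figures below are invented for illustration and are not from the thesis.

```python
# Series composition of availabilities: internal (micro) availability times
# the availability of each external cloud dependency (macro). Illustrative only.
def combined_availability(micro: float, macro_services: list[float]) -> float:
    """Every dependency must be up for the overall system to be up."""
    availability = micro
    for a in macro_services:
        availability *= a
    return availability

# e.g. a pharmacovigilance-style system at 99.9% internal availability,
# relying on a cloud platform at 99.95% and a cloud database at 99.9%:
print(f"{combined_availability(0.999, [0.9995, 0.999]):.5f}")  # -> 0.99750
```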

Relevance:

40.00%

Publisher:

Abstract:

It has been shown that cloud computing brings cost benefits and promotes efficiency in the operations of organizations, whatever their type or size. However, few public organizations are benefiting from this paradigm shift in the way organizations consume and manage computational resources. The objective of this thesis is to analyze both the internal and external factors that may influence the adoption of cloud computing by public organizations, and to propose strategies that can assist these organizations on their path to cloud usage. To achieve this objective, a SWOT analysis was conducted, identifying internal factors (strengths and weaknesses) and external factors (opportunities and threats) that can influence the adoption of a governmental cloud. By combining the internal and external factors in a TOWS matrix, a list of possible strategies was formulated to guide decision-making on the transition to a cloud environment.
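As a minimal sketch of the TOWS step, the following Python snippet crosses internal factors with external ones to enumerate the four strategy quadrants (SO, WO, ST, WT); the factor lists are generic placeholders, not the thesis’s actual SWOT items.

```python
# Sketch of a TOWS matrix: cross internal (S/W) with external (O/T) factors.
# All factor strings are invented placeholders for illustration.
from itertools import product

internal = {"S": ["in-house IT staff"], "W": ["legacy procurement rules"]}
external = {"O": ["governmental cloud offering"], "T": ["data-protection constraints"]}

def tows(internal, external):
    """Build the SO/WO/ST/WT quadrants by pairing every factor combination."""
    matrix = {}
    for (ik, ivs), (ek, evs) in product(internal.items(), external.items()):
        matrix[ik + ek] = [f"address '{e}' using '{i}'" for i, e in product(ivs, evs)]
    return matrix

for quadrant, strategies in tows(internal, external).items():
    print(quadrant, strategies)
```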

Relevance:

30.00%

Publisher:

Abstract:

A new cloud-point extraction and preconcentration method, using a cationic surfactant, Aliquat-336 (tricaprylylmethylammonium chloride), has been developed for the determination of cyanobacterial toxins, microcystins, in natural waters. Sodium sulfate was used to induce phase separation at 25 °C. The phase behavior of Aliquat-336 with respect to the concentration of Na2SO4 was studied. The cloud-point system revealed a very high phase volume ratio compared to other established systems of nonionic, anionic, and cationic surfactants. At pH 6-7, it showed an outstanding selectivity in analyte extraction for anionic species. Only MC-LR and MC-YR, which are known to be predominantly anionic, were extracted (with average recoveries of 113.9 ± 9% and 87.1 ± 7%, respectively). MC-RR, which is likely to be amphoteric in the above pH range, was not detectable in the extract. Coupled to HPLC/UV separation and detection, the cloud-point extraction method (with 2.5 mM Aliquat-336 and 75 mM Na2SO4 at 25 °C) offered detection limits of 150 ± 7 and 470 ± 72 pg/mL for MC-LR and MC-YR, respectively, in 25 mL of deionized water. The repeatability of the method was 7.6% for MC-LR and 7.3% for MC-YR. The cloud-point extraction process can be completed within 10-15 min with no cleanup steps required. The applicability of the new method to the determination of microcystins in real samples was demonstrated using natural surface waters, collected from a local river and a local duck pond, spiked with realistic concentrations of microcystins. The effects of salinity and organic matter (TOC) content in the water sample on the extraction efficiency were also studied.

Relevance:

30.00%

Publisher:

Abstract:

Final master’s project submitted to obtain the degree of Master in Informatics and Computer Engineering.