875 results for The Cloud
Abstract:
Data analytic applications are characterized by large data sets that are subject to a series of processing phases. Some of these phases are executed sequentially, but others can be executed concurrently or in parallel on clusters, grids or clouds. The MapReduce programming model has been applied to process large data sets in cluster and cloud environments. To develop an application using MapReduce, one needs to install, configure or access specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon cloud. It would be desirable to have more flexibility in adjusting such configurations according to the application characteristics. Furthermore, composing the multiple phases of a data analytic application requires the specification of all the phases and their orchestration. The original MapReduce model and environment lack flexible support for such configuration and composition. Recognizing that scientific workflows have been successfully applied to modeling complex applications, this paper describes our experiments on implementing MapReduce as subworkflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). A text mining data analytic application is modeled as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. As in typical MapReduce environments, the end user only needs to define the application algorithms for input data processing and for the map and reduce functions. In the paper we present experimental results from using the AWARD framework to execute MapReduce workflows deployed over multiple Amazon EC2 (Elastic Compute Cloud) instances.
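To illustrate the kind of user-supplied code referred to above, the following is a minimal word-count-style sketch of map and reduce functions for a text-mining phase; the function names and the tiny sequential driver are illustrative only and do not reflect the AWARD framework's actual API.

    from collections import defaultdict

    # Illustrative user-supplied functions for a word-count text-mining phase.
    def map_fn(_, line):
        for word in line.split():
            yield word.lower(), 1

    def reduce_fn(word, counts):
        yield word, sum(counts)

    # Tiny sequential driver standing in for the MapReduce runtime.
    def run(lines):
        groups = defaultdict(list)
        for i, line in enumerate(lines):
            for key, value in map_fn(i, line):
                groups[key].append(value)
        return dict(pair for key, values in groups.items() for pair in reduce_fn(key, values))

    print(run(["the cloud", "the grid and the cloud"]))  # {'the': 3, 'cloud': 2, 'grid': 1, 'and': 1}

In a real deployment the runtime, not the user, handles partitioning, shuffling and orchestration of these functions across the EC2 instances.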
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can substantially degrade system performance, preventing the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
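For reference, the work-per-Joule figure quoted above can be read as useful work completed divided by the energy consumed; the sketch below shows this reading with hypothetical numbers and is not the simulator's actual data model.

    # Illustrative work-per-Joule metric: useful work completed per unit of energy.
    def work_per_joule(completed_work_units, energy_joules):
        return sum(completed_work_units) / energy_joules

    # Hypothetical figures: three completed jobs and the total energy drawn by the virtual cluster.
    print(work_per_joule([120.0, 80.0, 200.0], energy_joules=5.0e4))  # 0.008 work units per Joule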
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
Cloud computing has recently become very popular, and several bioinformatics applications already exist in that domain. The aim of this article is to analyse a current cloud system with respect to usability, benchmark its performance, and compare its user-friendliness with a conventional cluster job submission system. Given the current hype around the topic, user expectations are rather high, but current results show that neither the price/performance ratio nor the usage model is very satisfactory for large-scale embarrassingly parallel applications. However, for small- to medium-scale applications that require CPU time at certain peak times, the cloud is a suitable alternative.
Abstract:
The emergence of powerful new technologies, the existence of large quantities of data, and increasing demands for the extraction of added value from these technologies and data have created a number of significant challenges for those charged with both corporate and information technology management. The possibilities are great, the expectations high, and the risks significant. Organisations seeking to employ cloud technologies and exploit the value of the data to which they have access, be this "Big Data" available from different external sources or data held within the organisation in structured or unstructured formats, need to understand the risks involved in such activities. Data owners have responsibilities towards the subjects of the data and must also frequently demonstrate that they are in compliance with current standards, laws and regulations. This thesis sets out to explore the nature of the technologies that organisations might utilise, identify the most pertinent constraints and risks, and propose a framework for the management of data from discovery to external hosting that allows the most significant risks to be managed through the definition, implementation, and performance of appropriate internal control activities.
Abstract:
The spread of sociocultural approaches and critical literacy studies, which offer a holistic perspective on communicative skills, has reached our country at a time when the commonest environment for writing is the internet, and information technologies have transformed writing with new channels, genres, forms of preparation and languages. Thanks to these changes, educational programmes now include new concepts of literacy related to knowledge and the use of digital environments. This paper explores the impact of introducing these new communicative environments into the teaching of written expression at secondary level and puts forward some ideas for linking the learning of writing to present-day communicative contexts and established practices. Without forgetting the achievements of recent decades, we need to make a series of changes in order to bring new writing practices into the classroom and leave behind others we had championed as necessary when the goal was to move beyond exclusively linguistic or grammatical approaches.
Abstract:
Millions of enterprises move their applications to the cloud every year. According to Forrester Research, "the global cloud computing market will grow from $40.7 billion in 2011 to $241 billion in 2020". Due to increased interest and demand, a broad range of providers and solutions has appeared in the market. It is vital to be able to predict possible problems correctly and to classify and mitigate the risks associated with the migration process. The study shows the main criteria that should be taken into consideration when deciding to move enterprise applications to the cloud and choosing an appropriate vendor. The main goal of the research is to identify the main problems that arise during migration to the cloud and to propose solutions for preventing them and for mitigating their consequences should they occur. The research provides an overview of existing cloud solutions and deployment models for enterprise applications. It identifies the decision drivers for migrating applications to the cloud and the potential risks and benefits associated with doing so. Finally, best practices for successful enterprise-to-cloud migration are formulated based on the analysis of case studies.
Abstract:
Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This study examines supply chain management problems in practice and the reduction of perceived demand information distortion (the bullwhip effect) by means of an interfirm information system delivered as a cloud service to a company operating in the telecommunications industry. The purpose is to shed light, in practice, on whether the interfirm information system affects the performance of the supply chain and, in particular, reduces the bullwhip effect. In addition, a holistic case study of the global telecommunications company's supply chain and the challenges it is facing is presented, and the study proposes some measures to improve the situation. The theoretical part covers the supply chain and its management as well as the improvement of its efficiency, introducing the relevant theories and related previous research. In addition, the study presents performance metrics for detecting and tracking the bullwhip effect. The theoretical part ends by presenting the cloud-based business intelligence framework used as the background of this study. The research strategy is a qualitative case study, supported by quantitative data collected from the databases of a telecommunications sector company. Qualitative data were gathered mainly through two open interviews and the e-mail exchange during the development project. In addition, other materials from the company were collected during the project, and the company's website was also used as a source. The data were collected into a dedicated case-study database in order to increase reliability. The results show that the bullwhip effect can be reduced with the interfirm information system and with the use of CPFR and S&OP models, in particular by combining them into integrated business planning. According to this study, however, the interfirm information system does not solve all problems related to the supply chain and its effectiveness, because the company's processes and human activities also have a major impact.
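As an illustration of the kind of performance metric used to detect and track the bullwhip effect, a common measure is the ratio of order variability to demand variability; the sketch below uses hypothetical weekly figures and is not the specific metric set used in the study.

    from statistics import pvariance

    def bullwhip_ratio(orders, demand):
        # Values above 1.0 indicate that order variability exceeds demand
        # variability, i.e. demand information is being distorted upstream.
        return pvariance(orders) / pvariance(demand)

    demand = [100, 105, 98, 110, 102, 95, 108, 101]
    orders = [90, 130, 70, 150, 85, 60, 140, 95]
    print(f"bullwhip ratio: {bullwhip_ratio(orders, demand):.2f}")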
Abstract:
Scalar-flux budgets have been obtained from large-eddy simulations (LESs) of the cumulus-capped boundary layer. Parametrizations of the terms in the budgets are discussed, and two parametrizations for the transport term in the cloud layer are proposed. It is shown that these lead to two models for scalar transports by shallow cumulus convection. One is equivalent to the subsidence detrainment form of convective tendencies obtained from mass-flux parametrizations of cumulus convection. The second is a flux-gradient relationship that is similar in form to the non-local parametrizations of turbulent transports in the dry-convective boundary layer. Using the fluxes of liquid-water potential temperature and total water content from the LES, it is shown that both models are reasonable diagnostic relations between fluxes and the vertical gradients of the mean fields. The LESs used in this study are for steady-state convection, so it is possible to treat the fluxes of conserved thermodynamic variables as independent and to ignore the effects of condensation. It is argued that a parametrization of cumulus transports in a model of the cumulus-capped boundary layer should also include an explicit representation of condensation. A simple parametrization of the liquid-water flux in terms of conserved variables is also derived.
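For orientation, a flux-gradient relationship of the kind referred to above typically takes the schematic form

    \overline{w'\phi'} = -K_\phi \left( \frac{\partial \overline{\phi}}{\partial z} - \gamma_\phi \right),

where \phi stands for a conserved scalar such as the liquid-water potential temperature \theta_l or the total water content q_t, K_\phi is an eddy diffusivity and \gamma_\phi a non-local (counter-gradient) correction; these symbols are generic placeholders rather than the specific coefficients derived in the paper.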
Abstract:
Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and that performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
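To make the service interface concrete, the sketch below parses a WMS GetMap request as a plain WSGI application; the paper's implementation runs inside Google App Engine, and the render_tile helper here is hypothetical.

    from urllib.parse import parse_qs

    def render_tile(layers, bbox, width, height):
        # Placeholder: return PNG bytes for the requested layers and extent.
        raise NotImplementedError

    def wms_app(environ, start_response):
        # Case-insensitive WMS parameters: SERVICE, REQUEST, LAYERS, BBOX, WIDTH, HEIGHT, ...
        params = {k.upper(): v[0] for k, v in parse_qs(environ.get("QUERY_STRING", "")).items()}
        if params.get("REQUEST") != "GetMap":
            start_response("400 Bad Request", [("Content-Type", "text/plain")])
            return [b"Only GetMap is handled in this sketch"]
        bbox = [float(x) for x in params["BBOX"].split(",")]
        png = render_tile(params.get("LAYERS", ""), bbox, int(params["WIDTH"]), int(params["HEIGHT"]))
        start_response("200 OK", [("Content-Type", "image/png")])
        return [png]

Because each GetMap request is stateless, handlers of this form map naturally onto the per-request scaling model of a platform such as App Engine.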
Abstract:
2011 is the centenary year of the short paper (Wilson, 1911) first describing the cloud chamber, the device for visualising high-energy charged particles which earned the Scottish physicist Charles Thomas Rees (‘CTR’) Wilson the 1927 Nobel Prize for physics. His many achievements in atmospheric science, some of which have current relevance, are briefly reviewed here. CTR Wilson’s lifetime of scientific research work was principally in atmospheric electricity at the Cavendish Laboratory, Cambridge; he was Reader in Electrical Meteorology from 1918 and Jacksonian Professor from 1925 to 1935. However, he is immortalised in physics for his invention of the cloud chamber, because of its great significance as an early visualisation tool for particles such as cosmic rays (Galison, 1997). Sir Lawrence Bragg summarised its importance: