34 results for cloud computing resources
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Project submitted to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology and Bio-informatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (whether the cause is a hardware or software fault, lack of available resources, etc.), all the work it has already performed is simply lost; when the application is later re-initiated, it has to restart all its work from scratch, wasting resources and time, while remaining prone to another failure that may delay its completion with no deadline guarantees. Our proposed solution addresses these issues by incorporating checkpointing and migration mechanisms in a JVM. These make applications more robust and flexible, able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes research virtual machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
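The abstract gives no implementation details; as a rough illustration of the checkpointing idea, here is a minimal application-level sketch using Java serialization (in the paper the JVM captures state transparently, and names such as AppState and checkpoint.ser are hypothetical):

```java
import java.io.*;

// Hypothetical application state; the paper's JVM-level mechanism captures
// state transparently, without programmer intervention.
class AppState implements Serializable {
    long iterationsDone;
    double partialResult;
}

public class Checkpointing {
    static final String CKPT = "checkpoint.ser"; // assumed file name

    static void save(AppState s) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(CKPT))) {
            out.writeObject(s); // persist progress so a restart can resume
        }
    }

    static AppState loadOrFresh() {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(CKPT))) {
            return (AppState) in.readObject(); // resume after a failure
        } catch (IOException | ClassNotFoundException e) {
            return new AppState(); // no checkpoint yet: start from scratch
        }
    }

    public static void main(String[] args) throws Exception {
        AppState s = loadOrFresh();
        while (s.iterationsDone < 1_000_000) {
            s.partialResult += Math.sqrt(s.iterationsDone++); // long-running work
            if (s.iterationsDone % 100_000 == 0) save(s);     // periodic checkpoint
        }
        System.out.println(s.partialResult);
    }
}
```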
Abstract:
Final Master's project submitted to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible because customers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, different for each class of user and across different ranges of resource allocations: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility clients assign to a certain level of degradation when VMs are allocated in overcommitted environments (Public, Private, Community Clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, yielding more revenue per resource allocated and scaling well with the size of datacenters when compared with a utility-oblivious redistribution of resources. Clients also benefit: their workloads' execution times improve through an SLA-based redistribution of their VMs' computational power.
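The concrete utility curves are defined in the paper's cost model; the sketch below only illustrates the idea of a range-based, class-dependent partial-utility function. The piecewise shape, class thresholds, and quadratic drop-off are assumptions for illustration:

```java
// A minimal sketch of a range-based partial-utility curve; the concrete
// shape below is an assumed stand-in, not the paper's cost model.
public class PartialUtility {
    // Utility retained by a client when a VM receives `alloc` of its
    // requested capacity (0.0..1.0), for a given user class.
    static double utility(double alloc, int userClass) {
        double floor = switch (userClass) {   // class-specific tolerance (assumed)
            case 0 -> 0.9;   // premium: little tolerance to degradation
            case 1 -> 0.6;   // standard
            default -> 0.3;  // best-effort
        };
        if (alloc >= floor) return 1.0;        // range of full utility
        if (alloc <= 0.0) return 0.0;
        return Math.pow(alloc / floor, 2);     // non-linear drop-off below the range
    }

    public static void main(String[] args) {
        // A provider can weigh the utility lost by degrading one VM against
        // the utility gained by boosting another before moving resources.
        System.out.println(utility(0.8, 0)); // premium VM at 80%: degraded
        System.out.println(utility(0.8, 2)); // best-effort VM at 80%: full utility
    }
}
```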
Abstract:
Data analytic applications are characterized by large data sets that are subject to a series of processing phases. Some of these phases are executed sequentially, but others can be executed concurrently or in parallel on clusters, grids or clouds. The MapReduce programming model has been applied to process large data sets in cluster and cloud environments. To develop an application using MapReduce, one needs to install/configure/access specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon Cloud. It would be desirable to have more flexibility in adjusting such configurations according to the application characteristics. Furthermore, composing the multiple phases of a data analytic application requires the specification of all the phases and their orchestration. The original MapReduce model and environment lack flexible support for such configuration and composition. Recognizing that scientific workflows have been successfully applied to modeling complex applications, this paper describes our experiments on implementing MapReduce as subworkflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). A text mining data analytic application is modeled as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. As in typical MapReduce environments, the end user only needs to define the application algorithms for input data processing and for the map and reduce functions. We present experimental results from using the AWARD framework to execute MapReduce workflows deployed over multiple Amazon EC2 (Elastic Compute Cloud) instances.
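As an illustration of what the end user defines, here is a standard Hadoop-style word count, the usual text-mining example; the AWARD subworkflow wiring itself is not shown, only the map and reduce functions a user would supply:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// The end user supplies only map and reduce; the framework (Hadoop, or an
// AWARD subworkflow node) handles distribution and orchestration.
public class WordCount {
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);   // emit (word, 1) for each token
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();  // total count per word
            context.write(key, new IntWritable(sum));
        }
    }
}
```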
Abstract:
This work presents an Information System (IS) whose goal is to foster collaboration among electric vehicles (EVs), enabling their entry into the electricity market and thereby compensating the EV owner. This compensation would take the form of measures that reduce the initial investment in purchasing the EV and of a more sustainable energy storage model. This IS was named the Collaborative Broker, or simply Broker, and should allow the EV owner to create a profile with personal information, information about the EV owned, and information about the intended collaboration, namely the locations and the percentage of the battery to be used in the collaboration. To secure entry into the energy market, a significant number of EVs must be grouped and filtered, a responsibility assigned to the Broker. Since optimizing energy consumption is an important factor in this work, the Broker was implemented on the Cloud Computing paradigm (abbreviated here as Cloud), where energy resources are shared. The Cloud offers great scaling capacity, allowing resources to be created as needed. Using the Cloud means that the initial specifications of the virtual machines needed to host the Broker's database or files can be configured according to the number of system users, so no resources are wasted on capacity that would go unused. Because of these dynamic aspects, intrinsic to the human factor, the Broker was implemented with an integration module for the social network Facebook, in the expectation that collaboration among different EVs will spread through the friends and acquaintances of each participating user. In short, this work aims to create an information system where EV owners can come together and create a collaboration that enables their entry into the electricity market. That collaboration will be extended through a social network and compensated through credits. Cloud platforms are discussed as a sustainable and advantageous basis for developing systems with great growth potential.
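As a rough sketch of the profile the Broker might store for each EV owner, following the fields the abstract mentions (personal data, vehicle data, collaboration locations and the battery share offered); all names here are hypothetical, not the system's actual data model:

```java
import java.util.List;

// Hypothetical owner profile, modeled on the fields described in the abstract.
record OwnerProfile(
        String name,                          // personal information
        String facebookId,                    // social-network integration
        String vehicleModel,                  // EV information
        double batteryCapacityKWh,
        List<String> collaborationLocations,  // where the EV can plug in
        double batteryShare                   // fraction of battery offered (0.0..1.0)
) {
    // Energy this owner can contribute to an aggregated market bid; the
    // Broker would sum this across profiles to reach the market minimum.
    double offeredKWh() {
        return batteryCapacityKWh * batteryShare;
    }
}
```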
Abstract:
Information technologies are a fundamental pillar of organizations, sustaining the business through dedicated infrastructures; as data centers grow, challenges arise regarding scalability, fault tolerance, performance, resource allocation, access security, restoration of large amounts of information, and energy efficiency. Adopting technologies based on cloud computing applies a shared-resource model that consolidates the infrastructure and addresses the challenges described above. Virtualization technologies aim to reduce the infrastructure, raising new considerations for local and storage networks, security, and information backup and restore due to the dynamics of a virtualized environment. In data centers this approach can yield a high level of consolidation, reducing physical servers, network ports, cabling, storage, space, energy and cost, while maintaining performance levels. This work defines a consolidation strategy for the data center under study that provides fault tolerance, fast provisioning of new services, scalability for more services, security in Demilitarized Zone (DMZ) networks, and data backup and restore with reduced impact on resources, allowing high throughput and high storage consolidation ratios. The proposed architecture implements this strategy with technologies optimized for cloud computing. A study was conducted based on the analysis of a data center with the VMware Capacity Planner application, which monitored the environment for eight months and recorded access metrics used to dimension the proposed architecture. The cloud implementation validates an 85% reduction in server infrastructure, as well as communication latency, data transfer rates, service latencies, the impact of protocols on data transfer, virtualization overhead, migration of services across the physical infrastructure, backup and restore times, and DMZ security.
Abstract:
Scientific dissertation carried out to obtain the degree of Master in Computer Networks and Multimedia Engineering
Abstract:
Project report carried out to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Project submitted to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Portugal joined the effort to create the EPOS infrastructure in 2008, and it became immediately apparent that a national network of Earth Sciences infrastructures was required to participate in the initiative. At that time, FCT was promoting the creation of a national infrastructure called RNG - Rede Nacional de Geofísica (National Geophysics Network). A memorandum of understanding had been agreed upon, and it therefore seemed straightforward to use RNG (enlarged to include relevant participants that were not RNG members) as the Portuguese partner to EPOS-PP. However, at the time of signature of the EPOS-PP contract with the European Commission (November 2010), RNG had not yet gained formal identity, and IST (one of the participants) signed the grant agreement on behalf of the Portuguese consortium. During 2011 no progress was made towards the formal creation of RNG, and the composition of the network, based on proposals submitted to a call issued in 2002, had by then become obsolete. In February 2012, the EPOS national contact point was mandated by the representatives of the participating national infrastructures to request from FCT the recognition of a new consortium, C3G - Collaboratory for Geology, Geodesy and Geophysics, as the Portuguese partner to EPOS-PP. This request was supported by formal letters from the following institutions:
- LNEG, Laboratório Nacional de Energia e Geologia (National Geological Survey);
- IGP, Instituto Geográfico Português (National Geographic Institute);
- IDL, Instituto Dom Luiz - Laboratório Associado;
- CGE, Centro de Geofísica de Évora;
- FCTUC, Faculdade de Ciências e Tecnologia da Universidade de Coimbra;
- Instituto Superior de Engenharia de Lisboa;
- Instituto Superior Técnico;
- Universidade da Beira Interior.
While Instituto de Meteorologia (Meteorological Institute, in charge of the national seismographic network) actively supports the national participation in EPOS, a letter of support was not feasible in view of the organic changes underway at the time. C3G aims at the integration and coordination, at national level, of existing Earth Sciences infrastructures, namely:
- seismic and geodetic networks (IM, IST, IDL, CGE);
- rock physics laboratories (ISEL);
- geophysical laboratories dedicated to natural resources and environmental studies;
- geological and geophysical data repositories;
- facilities for data storage and computing resources.
The C3G - Collaboratory for Geology, Geodesy and Geophysics will be coordinated by Universidade da Beira Interior, whose Department of Informatics will host the C3G infrastructure.
Abstract:
3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most constructions are built from planar surfaces, recognizing those surfaces paves the way for automating the generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results obtained show the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud with a minor distance to the actual configurations. © 2014 IEEE.
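The paper's exact objective function is not reproduced in the abstract; the sketch below shows the general shape of a plane-fitting fitness evaluation a GA could maximize, with a log-based score as an assumed stand-in for the logarithmically proportional function:

```java
// A rough sketch of scoring a candidate plane against a point cloud; the
// log-based falloff is an assumption for illustration, not the paper's function.
public class PlaneFitness {
    // Candidate plane: ax + by + cz + d = 0, with (a, b, c) unit-normalized
    // so that the expression below is a true point-to-plane distance.
    static double fitness(double[] plane, double[][] points, double epsilon) {
        double score = 0.0;
        for (double[] p : points) {
            double dist = Math.abs(plane[0] * p[0] + plane[1] * p[1]
                                 + plane[2] * p[2] + plane[3]);
            // Points close to the plane contribute more, with a logarithmic
            // falloff that tolerates varying structural density.
            score += Math.log1p(epsilon / (dist + epsilon));
        }
        return score; // a GA evolves (a, b, c, d) to maximize this score
    }
}
```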
Abstract:
Single-processor architectures are unable to provide the required performance for high-performance embedded systems. Parallel processing based on general-purpose processors can achieve these performances with a considerable increase in required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA executing several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
Abstract:
The rapidly increasing computing power, available storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services on locally generated data. In this paper, we give an early overview of a distributed mobile system that allows accessing and processing data distributed across mobile devices without an external communication infrastructure. Copyright © 2015 ICST.
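The abstract does not describe the system's discovery or transport layers; as a purely illustrative sketch, one device could expose locally stored data to neighbors over a plain TCP socket like this (the port and the lookup helper are hypothetical):

```java
import java.io.*;
import java.net.*;

// A minimal sketch of one device serving locally stored data to neighbors
// without any remote server; illustrative only, not the paper's protocol.
public class LocalDataNode {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) { // assumed port
            while (true) {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(peer.getInputStream()));
                     PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
                    String key = in.readLine();       // neighbor requests a data item
                    out.println(lookupLocally(key));  // served from on-device storage
                }
            }
        }
    }

    static String lookupLocally(String key) {
        return "value-for-" + key; // placeholder for the device's local store
    }
}
```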
Abstract:
This paper is about a PV system linked to the electric grid through power converters under cloud cover conditions. The PV system is modeled by the five-parameter equivalent circuit, and an MPPT procedure is integrated into the model. The converter model comprises a DC-DC boost converter associated with a three-level inverter. PI controllers are used with PWM by sliding mode control associated with space vector modulation to control the boost converter and the inverter. A case study presents a simulation assessing the performance of a PV system linked to the electric grid. Conclusions regarding the integration of the PV system into the electric grid are presented. © IFIP International Federation for Information Processing 2015.
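The abstract does not specify which MPPT procedure is used; a common choice is perturb-and-observe, sketched below under that assumption (readVoltage, readCurrent and setDutyCycle are hypothetical stand-ins for sensor and converter interfaces):

```java
// A sketch of a perturb-and-observe MPPT loop, assumed here for illustration;
// the paper's actual MPPT procedure is not given in the abstract.
public class PerturbAndObserveMppt {
    static double duty = 0.5, step = 0.01, lastPower = 0.0;

    static void mpptStep() {
        double v = readVoltage(), i = readCurrent();
        double power = v * i;
        if (power < lastPower) step = -step;     // power fell: reverse the perturbation
        duty = Math.max(0.0, Math.min(1.0, duty + step));
        setDutyCycle(duty);                      // drives the DC-DC boost converter
        lastPower = power;
    }

    // Hypothetical sensor and converter interfaces.
    static double readVoltage() { return 30.0; }
    static double readCurrent() { return 5.0; }
    static void setDutyCycle(double d) { }
}
```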