54 results for cloud computing, hypervisor, virtualizzazione, live migration, infrastructure as a service


Relevance:

100.00%

Publisher:

Abstract:

In recent years, the exponential growth in the use of mobile devices and of services made available in the “Cloud” has changed the way systems are designed and implemented, in an effort to meet requirements that until then were not essential. The enormous increase in mobile devices such as smartphones and tablets has made the design and implementation of distributed systems even more important in this area, in an attempt to deliver systems and applications that are more flexible, robust, scalable and, above all, interoperable. The limited processing and storage capacity of these devices made the emergence and growth of technologies that promise to solve many of the identified problems essential. The concept of Middleware aims to address these gaps in more advanced distributed systems, providing a solution for the organisation and design of system architectures while offering extremely fast, secure and reliable communication. A Middleware-based architecture gives systems a communication channel that provides strong interoperability, scalability and security in message exchange, among other advantages. In this thesis, several types and examples of distributed systems are described and analysed, together with a detailed description of three communication protocols (XMPP, AMQP and DDS), two of which (XMPP and AMQP) are used in real projects described throughout the thesis. The main goal of this thesis is to present a study and a survey of the state of the art on the Middleware concept applied to large-scale distributed systems, showing that using a Middleware can simplify and speed up the design and development of a distributed system and brings considerable advantages in the near future.
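The thesis names XMPP and AMQP as the protocols used in real projects. Purely as an illustration of Middleware-style message exchange (not taken from the thesis), the following Python sketch publishes a small status message to an AMQP broker with the pika client; the broker address, queue name and payload are assumptions.

```python
import json
import pika  # third-party AMQP 0-9-1 client

# Hypothetical example: a mobile client reports its status through the broker.
payload = {"device": "tablet-42", "status": "online"}

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue so messages survive a broker restart (illustrative choice).
channel.queue_declare(queue="device.status", durable=True)

channel.basic_publish(
    exchange="",                      # default exchange routes by queue name
    routing_key="device.status",
    body=json.dumps(payload),
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
connection.close()
```

A consumer on the other side of the broker would subscribe to the same queue, which is where the interoperability and decoupling benefits described above come from.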

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the CloudAnchor brokerage platform for the transaction of single-provider as well as federated Infrastructure as a Service (IaaS) resources. The platform, which is a layered Multi-Agent System (MAS), provides multiple services, including (consumer or provider) business registration and deregistration, provider coalition creation and termination, provider lookup and invitation, and negotiation services regarding brokerage, coalitions and resources. Providers, consumers and virtual providers, representing provider coalitions, are modelled by dedicated agents within the platform. The main goal of the platform is to negotiate and establish Service Level Agreements (SLA). In particular, the platform contemplates the establishment of brokerage SLA – bSLA – between the platform and each provider or consumer, coalition SLA – cSLA – between the members of a coalition of providers and resource SLA – rSLA – between a consumer and a provider. Federated resources are held and negotiated by virtual providers on behalf of the corresponding coalitions of providers.
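The abstract distinguishes three SLA types and the parties that establish them. The sketch below is not part of the CloudAnchor platform itself; it only illustrates one possible way to model the bSLA/cSLA/rSLA distinction, and all field names are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class BrokerageSLA:          # bSLA: between the platform and a provider or consumer
    party: str               # provider or consumer identifier
    brokerage_fee: float     # fee charged by the platform (assumed field)

@dataclass
class CoalitionSLA:          # cSLA: among the members of a provider coalition
    virtual_provider: str    # agent representing the coalition
    members: List[str]
    revenue_shares: Dict[str, float]   # member id -> share of coalition revenue

@dataclass
class ResourceSLA:           # rSLA: between a consumer and a (virtual) provider
    consumer: str
    provider: str
    cpu_cores: int
    memory_gb: int
    price_per_hour: float

# A federated resource would be covered by an rSLA signed by the virtual
# provider on behalf of its coalition, backed by the corresponding cSLA.
```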

Relevance:

40.00%

Publisher:

Abstract:

Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can negatively impact system performance substantially, prohibiting the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users’ jobs. Allied with an energy optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
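The evaluation is reported in terms of the work per Joule ratio. The fragment below is only a hedged illustration of that metric and of a naive consolidation decision (the utilisation threshold and field names are assumptions); it is not the decision-making algorithm proposed in the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Host:
    name: str
    completed_work: float   # normalised task units finished in the interval
    energy_joules: float    # energy consumed over the same interval
    cpu_utilisation: float  # 0.0 .. 1.0

def work_per_joule(hosts: List[Host]) -> float:
    """Cluster-wide work per Joule over a measurement interval."""
    total_work = sum(h.completed_work for h in hosts)
    total_energy = sum(h.energy_joules for h in hosts)
    return total_work / total_energy if total_energy else 0.0

def consolidation_candidates(hosts: List[Host], threshold: float = 0.2) -> List[str]:
    """Hosts below the (assumed) utilisation threshold are candidates for
    having their VMs migrated away so the host can be powered down."""
    return [h.name for h in hosts if h.cpu_utilisation < threshold]

hosts = [Host("h1", 120.0, 9000.0, 0.15), Host("h2", 400.0, 11000.0, 0.75)]
print(work_per_joule(hosts), consolidation_candidates(hosts))
```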

Relevance:

30.00%

Publisher:

Abstract:

It is foreseen that future dependable real-time systems will also have to meet flexibility, adaptability and reconfigurability requirements. Considering the distributed nature of these computing systems, a communication infrastructure that makes it possible to fulfil all those requirements is thus of major importance. Although Ethernet has been used primarily as an information network, there is a strong belief that some very recent technological advances will enable its use in dependable applications with real-time requirements. Indeed, several recently standardised mechanisms associated with Switched Ethernet seem promising for enabling communication infrastructures to support hard real-time, reliable and flexible distributed applications. This paper describes the motivation and the work being developed within the CIDER (Communication Infrastructure for Dependable Evolvable Real-Time Systems) project, which envisages the use of COTS Ethernet as an enabling technology for future dependable real-time systems. It is foreseen that the CIDER approach will constitute a relevant stream of research, since it brings together cutting-edge research in the field of real-time and dependable distributed systems and the industrial eagerness to expand Ethernet responsibilities to support dependable real-time applications.

Relevance:

20.00%

Publisher:

Abstract:

Urban Computing (UrC) provides users with information appropriate to their situation by considering the context of users, devices, and the social and physical environment in urban life. With social network services, UrC makes it possible for people with common interests to organise a virtual society through the exchange of context information among themselves. In these settings, people and personal devices are vulnerable to fake and misleading context information transferred from unauthorised and unauthenticated servers by attackers. So-called smart devices, which run automatically on certain context events, are even more vulnerable if they are not prepared for such attacks. In this paper, we illustrate some UrC service scenarios and describe important context information, possible threats, protection methods, and secure context management for people.
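One minimal way to reject context information coming from unauthenticated sources is a shared-key message authentication code. The standard-library sketch below signs and verifies a context payload with HMAC-SHA256; it is an illustration of this general idea, not the protection method described in the paper, and the key and payload are assumptions.

```python
import hmac
import hashlib
import json

def sign_context(payload: dict, key: bytes) -> str:
    """Return an HMAC-SHA256 tag over a canonical JSON encoding of the context."""
    message = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_context(payload: dict, tag: str, key: bytes) -> bool:
    """Accept the context only if the tag matches (constant-time comparison)."""
    return hmac.compare_digest(sign_context(payload, key), tag)

key = b"shared-secret-between-device-and-trusted-server"   # assumption
context = {"location": "bus-stop-17", "temperature": 21.5}
tag = sign_context(context, key)
assert verify_context(context, tag, key)
```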

Relevance:

20.00%

Publisher:

Abstract:

We describe a novel approach to scheduling resolution by combining Autonomic Computing (AC), Multi-Agent Systems (MAS) and Nature-Inspired Optimization Techniques (NIT). Autonomic Computing has emerged as a paradigm aiming at embedding applications with a management structure similar to a central nervous system, its natural evolution with respect to current computing being to provide systems with self-managing ability with minimal human interference. In this paper we envisage the use of the Multi-Agent Systems paradigm to support dynamic and distributed scheduling in Manufacturing Systems with autonomic properties, in order to reduce the complexity of managing systems and the need for human interference. Additionally, we consider the resolution of realistic problems: the scheduling of a Cutting and Treatment Stainless Steel Sheet Line is evaluated. Results show that the proposed approach has advantages when compared with other scheduling systems.
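As a generic illustration of the kind of nature-inspired technique a scheduling agent could embed (not the NIT used in the paper), the sketch below assigns jobs to machines with simulated annealing, minimising the makespan; job durations and parameters are assumptions.

```python
import math
import random

def makespan(assignment, durations, n_machines):
    load = [0.0] * n_machines
    for job, machine in enumerate(assignment):
        load[machine] += durations[job]
    return max(load)

def anneal(durations, n_machines, steps=5000, t0=10.0, cooling=0.999):
    """Simulated annealing: move one job at random; worse moves are accepted
    with probability exp(-delta / T), which decreases as T cools."""
    current = [random.randrange(n_machines) for _ in durations]
    current_cost = makespan(current, durations, n_machines)
    best, best_cost = list(current), current_cost
    temperature = t0
    for _ in range(steps):
        job = random.randrange(len(durations))
        old_machine = current[job]
        current[job] = random.randrange(n_machines)
        new_cost = makespan(current, durations, n_machines)
        delta = new_cost - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current_cost = new_cost                 # accept the move
            if new_cost < best_cost:
                best, best_cost = list(current), new_cost
        else:
            current[job] = old_machine              # reject and undo
        temperature *= cooling
    return best, best_cost

print(anneal([4.0, 2.0, 7.0, 1.0, 3.0], n_machines=2))
```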

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance:

20.00%

Publisher:

Abstract:

Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread within education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time and not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about and present new ways of exploiting an individual’s learning. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities ranging from the use of social software through widgets, computer gaming, and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.
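The TRAILER workflow gathers and semantically tags informal learning activities so they can be shared between institutions and learners. The snippet below is only a hypothetical illustration of how such an activity record might be represented and serialised for exchange; the field names and tags are assumptions, not the project's actual schema.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class InformalLearningActivity:
    learner_id: str
    source: str            # e.g. "social software widget", "remote laboratory"
    description: str
    tags: List[str]        # semantic tags attached by the learner
    timestamp: str         # ISO 8601

activity = InformalLearningActivity(
    learner_id="learner-001",
    source="remote laboratory",
    description="Ran a frequency-response experiment on the remote lab rig",
    tags=["control-systems", "experimentation"],
    timestamp="2013-05-14T10:32:00Z",
)

# Serialised form that could be exchanged between personal and institutional spaces.
print(json.dumps(asdict(activity), indent=2))
```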

Relevance:

20.00%

Publisher:

Abstract:

Navigation and interpretation of the surrounding environment by autonomous vehicles in unstructured environments remains a major challenge today. Sebastian Thrun describes in [Thr02] the mapping problem in robotic systems as that of acquiring a spatial model of the robot's surroundings. In this context, integrating sensor systems into robotic platforms so that they can build maps of the world around them is extremely important. The information extracted from these data can be interpreted and applied to localisation, navigation and object manipulation tasks. Until very recently, most robotic systems performing mapping or Simultaneous Localization And Mapping (SLAM) tasks used devices such as laser rangefinders and stereo cameras. Besides being expensive, these devices provide only two-dimensional information, collected through 2D cross-sections in the case of rangefinders. The paradigm of this kind of technology changed considerably with the market launch of RGB-D cameras, such as the one developed by PrimeSense, and the subsequent release of the Kinect by Microsoft for the Xbox 360 at the end of 2010. The quality of the depth sensor, given its low cost and its ability to acquire data in real time, is remarkable, and the sensor instantly became popular among researchers and enthusiasts. This technological advance gave rise to several tools for development and human interaction with this type of sensor, such as the Point Cloud Library (PCL) [RC11]. This library aims to provide support for all the common building blocks that a 3D application needs, with particular emphasis on the processing of n-dimensional point clouds acquired from RGB-D cameras, as well as laser scanners, Time-of-Flight cameras or stereo cameras. In this context, this dissertation evaluates and compares some of the modules and methods of the PCL library for solving problems inherent to the construction and interpretation of maps in unstructured indoor environments, using data from the Kinect. Based on this evaluation, a system architecture is proposed that systematises the registration of point clouds, corresponding to partial views of the world, into a consistent global model. The results of the evaluation of the PCL library attest to its suitability for solving the proposed problems. Proof of this suitability are the practical results obtained from the implementation of the proposed system architecture, which shows interesting performance as well as good prospects for integrating this type of concepts and technology into robotic platforms developed within projects of the Laboratório de Sistemas Autónomos (LSA).
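The dissertation performs registration with PCL's C++ modules; the NumPy sketch below shows only the core rigid-alignment step used inside ICP-style registration, assuming point correspondences are already known. It is an illustration of the underlying mathematics, not the PCL pipeline used in the work.

```python
import numpy as np

def rigid_transform(source: np.ndarray, target: np.ndarray):
    """Least-squares rotation R and translation t such that R @ p + t ~ q
    for corresponding rows p of source and q of target, both (N, 3)."""
    centroid_s = source.mean(axis=0)
    centroid_t = target.mean(axis=0)
    H = (source - centroid_s).T @ (target - centroid_t)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = centroid_t - R @ centroid_s
    return R, t

# Tiny sanity check with a known rotation about the z-axis.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.rand(100, 3)
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_transform(src, dst)
print(np.allclose(R_est, R_true, atol=1e-6))
```

In a full ICP loop this step alternates with nearest-neighbour matching until the alignment converges, which is what systematising partial views into a consistent global model requires.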

Relevance:

20.00%

Publisher:

Abstract:

Bisphenol A (BPA) is an endocrine disrupting chemical (EDC) whose migration from food packaging is recognized worldwide. However, the real overall food contamination and related consequences are yet largely unknown. Among humans, children’s exposure to BPA has been emphasized because of the immaturity of their biological systems. The main aim of this study was to assess the reproductive impact of BPA leached from commercially available plastic containers used for or related to child nutrition, performing ecotoxicological tests using the biomonitoring species Daphnia magna. Acute and chronic tests, as well as single and multigenerational tests, were done. Migration of BPA from several baby bottles and other plastic containers, evaluated by GC-MS, indicated that a broader range of foodstuff may be contaminated when packed in plastics. Ecotoxicological test results obtained using defined concentrations of BPA were in agreement with the literature, although a precocious maturity of daphnids was detected at 3.0 mg/L. Curiously, an increased reproductive output (neonates per female) was observed when daphnids were bred in the polycarbonate (PC) containers (145.1±4.3 % to 264.7±3.8 %), both in single and in multigenerational tests, in comparison with the negative control group (100.3±1.6 %). A strongly correlated dose-dependent ecotoxicological effect was observed, providing evidence that BPA leached from plastic food packaging materials acts as a functional estrogen in vivo at very low concentrations. In contrast, neonate production by daphnids cultured in polypropylene and non-PC bottles was slightly but not significantly enhanced (92.5±2.0 % to 118.8±1.8 %). Multigenerational tests also revealed magnification of the adverse effects, not only on fecundity but also on mortality, which represents a worrying trend for organisms that are chronically exposed to xenoestrogens for many generations. Two plausible explanations for the observed results could be given: a non-monotonic dose–response relationship or a mixture toxicity effect.

Relevance:

20.00%

Publisher:

Abstract:

The relation between information/knowledge expression and physical expression can be regarded as one of the elements of ambient intelligent computing [2],[3]. Moreover, because there are so many contexts around users and spaces as a user moves, all applications that use AmI for users are based on the relation between user devices and their environments. In these situations, AmI may produce wrong results from unreliable contexts injected by attackers. Recently, dedicated servers have come into use, so finding secure contexts and building contexts with a higher security level for safe communication have gained importance. Attackers may place their devices on the expected path of users in order to obtain user information illegally, or they may broadcast spam to users. This paper is an extension of [11], which studies the Security Grade Assignment Model (SGAM) to set up a Cyber-Society Organization (CSO).
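The details of SGAM are in the cited work and are not reproduced here; the snippet below is only a hypothetical illustration of the general idea of assigning a security grade to a context source from simple attributes, with the attributes and grading rule being assumptions.

```python
def security_grade(authenticated: bool, encrypted: bool, known_server: bool) -> str:
    """Hypothetical grading rule: contexts from unauthenticated sources are
    rejected outright; the rest are ranked by transport security and history."""
    if not authenticated:
        return "untrusted"        # likely fake or spam context
    if encrypted and known_server:
        return "high"
    if encrypted or known_server:
        return "medium"
    return "low"

print(security_grade(authenticated=True, encrypted=True, known_server=False))
```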

Relevance:

20.00%

Publisher:

Abstract:

Field communication systems (fieldbuses) are widely used as the communication support for distributed computer-controlled systems (DCCS) within all sorts of process control and manufacturing applications. There are several advantages in the use of fieldbuses as a replacement for the traditional point-to-point links between sensors/actuators and computer-based control systems, the most relevant of which is the decentralisation and distribution of processing power over the field. A widely used fieldbus is WorldFIP, which is normalised as European standard EN 50170. When using WorldFIP to support DCCS, an important issue is how to guarantee the timing requirements of the real-time traffic. WorldFIP has very interesting mechanisms to schedule data transfers, since it explicitly distinguishes periodic and aperiodic traffic. In this paper, we describe how WorldFIP handles these two types of traffic, and more importantly, we provide a comprehensive analysis of how to guarantee the timing requirements of the real-time traffic.
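As a hedged illustration of the kind of timing check developed for the periodic traffic (not the paper's actual analysis), the sketch below verifies that the periodic buffer transfers leave enough bus time free for aperiodic requests; the transfer times, periods and the reserved aperiodic share are assumptions.

```python
from typing import List, Tuple

def periodic_utilisation(variables: List[Tuple[float, float]]) -> float:
    """variables: (transfer_time_ms, production_period_ms) per periodic buffer.
    Returns the fraction of bus time consumed by periodic traffic."""
    return sum(c / t for c, t in variables)

def fits_on_bus(variables: List[Tuple[float, float]], aperiodic_share: float = 0.3) -> bool:
    """Necessary (not sufficient) condition: periodic traffic must leave the
    assumed share of each cycle free for aperiodic transfers."""
    return periodic_utilisation(variables) <= 1.0 - aperiodic_share

# Three periodic variables: 0.2 ms transfers produced every 5, 10 and 20 ms.
print(fits_on_bus([(0.2, 5.0), (0.2, 10.0), (0.2, 20.0)]))
```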

Relevance:

20.00%

Publisher:

Abstract:

Managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). The physical parameters of the data center (such as power, temperature, pressure and humidity) are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially because, for example, workloads are moved within a cloud infrastructure hosted in the data center. In this paper, we describe a data collection and distribution architecture that enables gathering the physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We believe this is an important characteristic for building more accurate heat-flow models of the data center and, with them, opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also makes it possible to minimize local hotspots, perform more accurate predictive maintenance (pending failures in cooling and other infrastructure equipment can be detected more promptly) and bill more accurately. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
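The paper defines a messaging system for collecting and distributing the sensor data. Purely as an illustrative sketch of such a publisher (the broker location, exchange name and routing-key scheme are assumptions, not the paper's design), the snippet below pushes one sensor reading onto an AMQP topic exchange with the pika client.

```python
import json
import time
import pika  # third-party AMQP client

reading = {
    "rack": "rack-12",
    "sensor": "inlet-temperature",
    "value_celsius": 24.3,
    "timestamp": time.time(),
}

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Topic exchange so consumers can subscribe per rack ("dc.rack-12.*")
# or per physical quantity ("dc.*.inlet-temperature").
channel.exchange_declare(exchange="dc.sensors", exchange_type="topic")
channel.basic_publish(
    exchange="dc.sensors",
    routing_key=f"dc.{reading['rack']}.{reading['sensor']}",
    body=json.dumps(reading),
)
connection.close()
```

A routing scheme of this kind is one way to serve both spatially fine-grained consumers (heat-flow models for a single rack) and data-center-wide consumers (billing, predictive maintenance) from the same stream.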