721 results for cloud computing, software as a service, SaaS, enterprise systems, IS success


Relevance:

100.00%

Publisher:

Abstract:

The goal of the POBICOS project is a platform that facilitates the development and deployment of pervasive computing applications destined for networked, cooperating objects. POBICOS object communities are heterogeneous in terms of the sensing, actuating, and computing resources contributed by each object. Moreover, it is assumed that an object community is formed without any master plan; for example, it may emerge as a by-product of a household acquiring everyday, POBICOS-enabled objects. As a result, the target object community is, at least partially, unknown to the application programmer, and so a POBICOS application should be able to deliver its functionality on top of diverse object communities (we call this opportunistic computing). The POBICOS platform includes a middleware offering a programming model for opportunistic computing, as well as development and monitoring tools. This paper briefly describes the tools produced in the first phase of the project. It also identifies the stakeholders using these tools and presents a development process for both the middleware and applications. © 2009 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is a technological advancement that provides resources through the Internet on a pay-as-you-go basis. Cloud computing uses virtualisation technology to enhance the efficiency and effectiveness of its advantages. Virtualisation is the key to consolidating computing resources so that multiple instances run on each piece of hardware, increasing the utilisation rate of every resource and thus reducing the number of resources that need to be bought, racked, powered, cooled, and managed. Cloud computing has very appealing features; however, many enterprises and users are still reluctant to move into the cloud due to serious security concerns related to the virtualisation layer. It is therefore of foremost importance to secure the virtual environment. In this paper, we present an elastic framework to secure virtualised environments for trusted cloud computing, called the Server Virtualisation Security System (SVSS). SVSS provides security solutions for Virtual Machines, located on the hypervisor, by deploying malicious activity detection techniques, network traffic analysis techniques, and system resource utilisation analysis techniques. SVSS consists of four modules: the Anti-Virus Control Module, the Traffic Behaviour Monitoring Module, the Malicious Activity Detection Module and the Virtualisation Security Management Module. An SVSS prototype has been deployed to validate its feasibility, efficiency and accuracy in a Xen virtualised environment.
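
As a rough illustration of how such a modular, hypervisor-side design might be wired together (all class names, event fields and detection rules below are hypothetical illustrations, not the actual SVSS implementation), a minimal Python sketch of a manager dispatching VM events to independent security modules could look like this:

```python
# Minimal sketch of an SVSS-style modular security pipeline.
# Module names, event fields and detection rules are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VmEvent:
    vm_id: str
    kind: str          # e.g. "net_packet", "proc_start", "cpu_sample"
    payload: dict = field(default_factory=dict)


class SecurityModule:
    def inspect(self, event: VmEvent) -> List[str]:
        """Return a list of alert strings for this event (empty if clean)."""
        return []


class TrafficBehaviourMonitor(SecurityModule):
    def inspect(self, event: VmEvent) -> List[str]:
        if event.kind == "net_packet" and event.payload.get("bytes", 0) > 1_000_000:
            return [f"{event.vm_id}: unusually large packet burst"]
        return []


class MaliciousActivityDetector(SecurityModule):
    def inspect(self, event: VmEvent) -> List[str]:
        if event.kind == "proc_start" and event.payload.get("name") == "cryptominer":
            return [f"{event.vm_id}: blacklisted process started"]
        return []


class SecurityManager:
    """Dispatches hypervisor-level VM events to every registered module."""

    def __init__(self, modules: List[SecurityModule]):
        self.modules = modules

    def handle(self, event: VmEvent) -> List[str]:
        alerts: List[str] = []
        for module in self.modules:
            alerts.extend(module.inspect(event))
        return alerts


if __name__ == "__main__":
    manager = SecurityManager([TrafficBehaviourMonitor(), MaliciousActivityDetector()])
    print(manager.handle(VmEvent("vm-42", "proc_start", {"name": "cryptominer"})))
```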

Relevance:

100.00%

Publisher:

Abstract:

Differential equations are often directly solvable by analytical means only in their one-dimensional version. Partial differential equations are generally not solvable by analytical means in two and three dimensions, with the exception of a few special cases. In all other cases, numerical approximation methods need to be utilized. One of the most popular methods is the finite element method. The main areas of focus here are the Poisson heat equation and the plate bending equation. The purpose of this paper is to provide a quick walkthrough of the various approaches that the authors followed in pursuit of creating optimal solvers, accelerated with the use of graphics processing units, and comparing them in terms of accuracy and time efficiency with existing or self-made non-accelerated solvers.
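
For reference, the Poisson problem mentioned above has the standard strong form and the weak (Galerkin) form that any finite element discretisation starts from; this is textbook material rather than anything specific to the paper's solvers:

```latex
% Poisson problem on a domain \Omega with homogeneous Dirichlet boundary:
% strong form and the weak form used by the finite element method.
\begin{aligned}
  -\nabla^2 u &= f \quad \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,\\
  \text{weak form:}\quad
  \int_\Omega \nabla u \cdot \nabla v \,\mathrm{d}x &= \int_\Omega f\, v \,\mathrm{d}x
  \qquad \forall v \in H^1_0(\Omega).
\end{aligned}
```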

Relevance:

100.00%

Publisher:

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium (e.g., 8,000 bit) to long length (e.g., 64,800 bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may be the more convenient choice, each providing different acceleration factors over conventional multicore CPUs.
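
To illustrate the "single kernel, many targets" idea in isolation (the toy SAXPY kernel below is a stand-in, not the authors' LDPC decoder), a minimal sketch using the pyopencl bindings builds and launches the same OpenCL C source on whichever device the selected context exposes, be it a CPU, a GPU, or an FPGA board with an OpenCL runtime:

```python
# Sketch: one OpenCL kernel retargeted to whatever device the context provides.
# The SAXPY kernel is a toy stand-in for the paper's LDPC decoding kernel.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void saxpy(__global const float *x,
                    __global float *y,
                    const float a)
{
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
"""

ctx = cl.create_some_context()          # picks a CPU, GPU or accelerator device
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL_SRC).build()

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

prg.saxpy(queue, (n,), None, x_buf, y_buf, np.float32(2.0))

result = np.empty_like(y)
cl.enqueue_copy(queue, result, y_buf)
print("max error:", np.max(np.abs(result - (2.0 * x + y))))
```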

Relevance:

100.00%

Publisher:

Abstract:

Context. The jets of compact accreting objects are composed of electrons and a mixture of positrons and ions. These outflows impinge on the interstellar or intergalactic medium and both plasmas interact via collisionless processes. Filamentation (beam-Weibel) instabilities give rise to the growth of strong electromagnetic fields. These fields thermalize the interpenetrating plasmas. 

Aims. Hitherto, the effects imposed by a spatial non-uniformity on filamentation instabilities have remained unexplored. We examine the interaction between spatially uniform background electrons and a minuscule cloud of electrons and positrons. The cloud size is comparable to that created in recent laboratory experiments, and such clouds may exist close to internal and external shocks of leptonic jets. The purpose of our study is to determine the prevalent instabilities, their ability to generate electromagnetic fields, and the mechanism by which the lepton micro-cloud transfers energy to the background plasma.

Methods. In our particle-in-cell (PIC) simulation, a square micro-cloud of equally dense electrons and positrons impinges on a spatially uniform plasma at rest. The latter consists of electrons with a temperature of 1 keV and immobile ions. The initially charge- and current-neutral micro-cloud has a temperature of 100 keV and a side length of 2.5 plasma skin depths of the micro-cloud. The side length is given in the reference frame of the background plasma. The mean speed of the micro-cloud corresponds to a relativistic factor of 15, which is relevant for laboratory experiments and for relativistic astrophysical outflows. The spatial distributions of the leptons and of the electromagnetic fields are examined at several times.
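
As a quick numerical aside (standard definitions, not results from the paper), a Lorentz factor of 15 corresponds to a cloud speed very close to c, and the skin depth used as the length unit is the usual electron inertial length:

```latex
% Speed implied by \gamma = 15 and the definition of the electron skin depth.
\begin{aligned}
  \beta &= \frac{v}{c} = \sqrt{1 - \frac{1}{\gamma^2}}
        = \sqrt{1 - \frac{1}{15^2}} \approx 0.9978,\\
  \lambda_s &= \frac{c}{\omega_{pe}}, \qquad
  \omega_{pe} = \sqrt{\frac{n_e e^2}{\epsilon_0 m_e}}.
\end{aligned}
```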

Results. A filamentation instability develops between the magnetic field carried by the micro-cloud and the background electrons. The electromagnetic fields, which grow from noise levels, redistribute the electrons and positrons within the cloud, which boosts the peak magnetic field amplitude. The current density and the moduli of the electromagnetic fields grow aperiodically in time and steadily along the direction that is anti-parallel to the cloud's velocity vector. The micro-cloud remains conjoined during the simulation. The instability induces an electrostatic wakefield in the background plasma. 

Conclusions. Relativistic clouds of leptons can generate and amplify magnetic fields even if they have a microscopic size, which implies that the underlying processes can be studied in the laboratory. The interaction of the localized magnetic field and the high-energy leptons will give rise to synchrotron jitter radiation. The wakefield in the background plasma dissipates the kinetic energy of the lepton cloud. Even the fastest lepton micro-clouds can be slowed down by this collisionless mechanism. Moderately fast charge- and current-neutralized lepton micro-clouds will deposit their energy close to relativistic shocks, and hence they do not constitute an energy loss mechanism for the shock.

Relevance:

100.00%

Publisher:

Abstract:

The increasing adoption of cloud computing, social networking, mobile and big data technologies provides challenges and opportunities for both research and practice. Researchers face a deluge of data generated by social network platforms, which is further exacerbated by the co-mingling of social network platforms and the emerging Internet of Everything. While the topicality of big data and social media increases, the literature lacks conceptual tools to help researchers, many of whom come from non-technical disciplines, approach, structure and codify knowledge from social media big data in diverse subject-matter domains. Researchers do not have a general-purpose scaffold to make sense of the data and the complex web of relationships between entities, social networks, social platforms and other third-party databases, systems and objects. This is further complicated when spatio-temporal data is introduced. Based on practical experience of working with social media datasets and on the existing literature, we propose a general research framework for social media research using big data. Such a framework assists researchers in placing their contributions in an overall context, focusing their research efforts and building the body of knowledge in a given discipline area using social media data in a consistent and coherent manner.

Relevance:

100.00%

Publisher:

Abstract:

Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel, coarse-grain and fine-grain, programmable hardware accelerators. It will also build a high-level programming framework that allows end-users to seamlessly utilize these accelerators in heterogeneous computing systems by employing typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark, etc.). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.
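
As a rough illustration of the programming-framework idea (the accelerator call below is a placeholder; VINEYARD's actual interfaces are not described here), a PySpark job can keep its usual dataflow shape while the per-record function dispatches to either a software routine or a hardware-accelerated one:

```python
# Sketch: the same Spark dataflow, with the mapped function dispatching to
# either a plain software kernel or a (hypothetical) hardware-accelerated one.
from pyspark.sql import SparkSession

USE_ACCELERATOR = False  # imagine this toggles an FPGA/GPU offload


def software_kernel(x: float) -> float:
    return x * x  # toy computation standing in for the real workload


def accelerated_kernel(x: float) -> float:
    # Placeholder: a real deployment would call into an accelerator runtime here.
    return x * x


def process(x: float) -> float:
    return accelerated_kernel(x) if USE_ACCELERATOR else software_kernel(x)


if __name__ == "__main__":
    spark = SparkSession.builder.master("local[*]").appName("sketch").getOrCreate()
    rdd = spark.sparkContext.parallelize(range(1_000_000))
    print("sum of squares:", rdd.map(process).sum())
    spark.stop()
```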

Relevance:

100.00%

Publisher:

Abstract:

The goal of this work is to study the characterisation and modelling of radio-frequency architectures for software-defined radio and cognitive radio applications. The constant arrival on the market of new standards and technologies for wireless communications has raised some limitations on the implementation of wideband radio transceivers. Moreover, the use of reconfigurable and adaptable systems based on the concepts of software-defined radio and cognitive radio will ensure the evolution towards the next generation of wireless communications. The core idea of this thesis is to solve some open problems and to propose relevant advances by taking advantage of the capabilities provided by digital signal processors in order to improve the overall performance of the proposed systems. First, several strategies for the design and implementation of radio transceivers are addressed, always focusing on their specific applicability to software-defined radio and cognitive radio systems. Current instrumentation solutions capable of characterising a device that operates simultaneously in the analogue and digital domains are also discussed, as well as the next steps in this area of characterisation and modelling. In addition, we present new behavioural model formats built specifically for the nonlinear description and characterisation of bandpass sampling receivers, as well as for nonlinear systems that use multi-carrier signals. A new architecture is presented, supported by the statistical evaluation of the radio signals, which increases the dynamic range of the receiver in multi-carrier scenarios. Likewise, a technique for maximising the reception bandwidth based on the use of the bandpass sampling receiver in its complex format is presented. Finally, it should be noted that all the proposed architectures are accompanied by a theoretical introduction and, whenever possible, by simulations, and are then validated experimentally with laboratory prototypes.
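
For background on the bandpass sampling receivers mentioned above, the textbook condition for alias-free uniform sampling of a real bandpass signal occupying the band [f_L, f_H] with bandwidth B = f_H − f_L is as follows (this is standard theory, not a result of the thesis):

```latex
% Allowed uniform sampling rates f_s for a bandpass signal in [f_L, f_H], B = f_H - f_L:
\frac{2 f_H}{n} \;\le\; f_s \;\le\; \frac{2 f_L}{n-1},
\qquad n = 1, 2, \dots, \left\lfloor \frac{f_H}{B} \right\rfloor .
```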

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the impact of cloud computing on the performance of Autodock and of GPUs on the performance of Gromacs. Cloud computing was applicable to reducing the "tail" seen when running Autodock on desktop grids, and the GPU version of Gromacs showed significant improvement over the CPU version. A large (200,000 compounds) library of small molecules, seven sialic acid analogues of the putative substrate and 8,000 sugar molecules were converted into pdbqt format and used to interrogate the Trichomonas vaginalis neuraminidase using Autodock Vina. Good binding energy was noted for some of the small molecules (about −9 kcal/mol), but the sugars bound with an affinity of less than −7.6 kcal/mol. The screening of the sugar library resulted in a "top hit" with α-2,3-sialyllacto-N-fucopentaose III, a derivative of the sialyl Lewis x structure and a known substrate of the enzyme. Indeed, of the top 100 hits, 8 were related to this structure. A comparison of Autodock Vina and Autodock 4.2 was made for the high-affinity small molecules; in some cases the results were superimposable, whereas in others the match was less good. The validation of this work will require extensive "wet lab" work to determine the utility of the workflow in the prediction of potential enzyme inhibitors.
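
As a small illustration of the kind of post-processing such a screen involves (the directory layout and file names are hypothetical; the "REMARK VINA RESULT" lines are where AutoDock Vina records predicted affinities in its output PDBQT files), a script to rank docked ligands by their best predicted affinity might look like this:

```python
# Sketch: rank AutoDock Vina docking outputs by best predicted affinity.
# Directory layout and file names are hypothetical; Vina writes the predicted
# affinity (kcal/mol) on "REMARK VINA RESULT" lines in its output PDBQT files.
import glob


def best_affinity(pdbqt_path: str) -> float:
    """Return the most negative (best) affinity found in a Vina output file."""
    best = float("inf")
    with open(pdbqt_path) as fh:
        for line in fh:
            if line.startswith("REMARK VINA RESULT:"):
                affinity = float(line.split()[3])  # kcal/mol for this pose
                best = min(best, affinity)
    return best


if __name__ == "__main__":
    results = []
    for path in glob.glob("docked/*.pdbqt"):       # hypothetical output folder
        results.append((best_affinity(path), path))

    results.sort()                                  # most negative first
    for affinity, path in results[:100]:            # the "top 100 hits"
        print(f"{affinity:7.2f} kcal/mol  {path}")
```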

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a layered Smart Grid architecture that enhances security and reliability and is able to act in order to maintain and correct infrastructure components without affecting the client service. The architecture presented is based on the core principles of well-designed software engineering, standing upon standards developed over the years. The layered Smart Grid offers a base tool to ease the implementation of new standards and energy policies. A test methodology for implementing ZigBee technology in the Smart Grid is presented, together with field tests that use ZigBee technology to control the new Smart Grid architecture approach. (C) 2014 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

The cornerstone of the interoperability of eLearning systems is the standard definition of learning objects. Nevertheless, for some domains this standard is insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. On the other hand, a standard definition of learning objects is not enough to ensure interoperability among eLearning systems; they must also use a standard API to exchange learning objects. This paper presents the design and implementation of a service-oriented repository of learning objects called crimsonHex. This repository is fully compliant with the existing interoperability standards and supports new definitions of learning objects for specialized domains. We illustrate this feature with the definition of programming problems as learning objects and their validation by the repository. The repository is also prepared to store usage data on learning objects in order to tailor the presentation order and adapt it to learner profiles.
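
As a sketch of what exchanging learning objects over a standard repository API can look like in practice (the base URL, endpoints and payload fields below are purely illustrative and are not crimsonHex's actual interface), a client might upload a packaged learning object and then query the repository like this:

```python
# Illustrative client for a learning-object repository with a REST-style API.
# The URL, endpoints and response fields are hypothetical, not crimsonHex's real API.
import requests

BASE_URL = "https://repository.example.org/api"


def upload_learning_object(package_path: str) -> str:
    """Upload a packaged learning object (e.g. a zipped content package)."""
    with open(package_path, "rb") as fh:
        resp = requests.post(f"{BASE_URL}/objects", files={"package": fh})
    resp.raise_for_status()
    return resp.json()["id"]


def search_learning_objects(query: str) -> list:
    """Search the repository metadata for matching learning objects."""
    resp = requests.get(f"{BASE_URL}/objects", params={"q": query})
    resp.raise_for_status()
    return resp.json()["results"]


if __name__ == "__main__":
    obj_id = upload_learning_object("bubble-sort-problem.zip")  # hypothetical package
    for hit in search_learning_objects("sorting programming problem"):
        print(hit["id"], hit.get("title", ""))
```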

Relevance:

100.00%

Publisher:

Abstract:

Personalised video can be achieved by inserting objects into a video play-out according to the viewer's profile. Content which has been authored and produced for general broadcast can take on additional commercial service features when personalised either for individual viewers or for groups of viewers participating in entertainment, training, gaming or informational activities. Although several scenarios and use-cases can be envisaged, we are focussed on the application of personalised product placement. Targeted advertising and product placement are currently garnering intense interest in the commercial networked media industries. Personalisation of product placement is a relevant and timely service for next generation online marketing and advertising and for many other revenue generating interactive services. This paper discusses the acquisition and insertion of media objects into a TV video play-out stream where the objects are determined by the profile of the viewer. The technology is based on MPEG-4 standards using object based video and MPEG-7 for metadata. No proprietary technology or protocol is proposed. To trade the objects into the video play-out, a Software-as-a-Service brokerage platform based on intelligent agent technology is adopted. Agencies, libraries and service providers are represented in a commercial negotiation to facilitate the contractual selection and usage of objects to be inserted into the video play-out.

Relevance:

100.00%

Publisher:

Abstract:

Project work submitted to obtain the Master's degree in Civil Engineering

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, Information Technologies (IT) are increasingly vital within organisations; IT is the engine that supports the business. For most organisations, the operation and development of IT rest on dedicated infrastructures (internal or external) known as Data Centres (DC). These infrastructures concentrate an organisation's data processing and storage equipment and are therefore, and will increasingly be, challenged on several fronts, such as scalability, availability, fault tolerance, performance, available or provisioned resources, security, energy efficiency and, inevitably, the associated costs. With the emergence of technologies based on cloud computing and virtualisation, a whole range of new ways to address the challenges described above opens up. Under this new paradigm, new opportunities for consolidating DCs arise, which may in turn pose new challenges for DC managers. It is therefore unrealistic, to say the least, for organisations simply to eliminate their DCs or to transform them to the highest quality standards. Organisations must optimise their DCs; however, an efficient project of this nature, able to support the demands imposed by the market, the needs of the business and the speed of technological evolution, requires solutions that are complex and costly both to implement and to manage. It is in this context that the present work arises. With the aim of studying DCs, a study of this topic is carried out, detailing the concept, its historical evolution, its topology, architecture and the existing standards that govern it. The study then details some of the main trends shaping the future of DCs. Building on the theoretical knowledge resulting from this study, a DC evaluation methodology based on decision criteria is developed. The study culminates in the analysis of a new technological solution and the evaluation of three possible implementation scenarios: the first based on keeping the current DC; the second based on implementing the new solution in another DC under an external hosting arrangement; and finally the third based on an IaaS implementation.
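
To make the decision-criteria idea concrete (the criteria, weights and scores below are invented for illustration and are not the ones used in the study), a simple weighted-sum comparison of the three scenarios could be sketched as follows:

```python
# Sketch: weighted-sum scoring of the three data-centre scenarios.
# Criteria, weights and scores are illustrative placeholders only.
CRITERIA_WEIGHTS = {
    "scalability": 0.20,
    "availability": 0.20,
    "security": 0.20,
    "energy_efficiency": 0.15,
    "cost": 0.25,
}

# Scores on a 1-5 scale for each scenario (hypothetical values).
SCENARIOS = {
    "keep current DC": {"scalability": 2, "availability": 3, "security": 4,
                        "energy_efficiency": 2, "cost": 3},
    "external hosting": {"scalability": 3, "availability": 4, "security": 3,
                         "energy_efficiency": 4, "cost": 3},
    "IaaS": {"scalability": 5, "availability": 4, "security": 3,
             "energy_efficiency": 4, "cost": 4},
}


def weighted_score(scores: dict) -> float:
    """Combine the per-criterion scores using the global weights."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())


if __name__ == "__main__":
    ranked = sorted(SCENARIOS.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name:20s} {weighted_score(scores):.2f}")
```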

Relevance:

100.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the Award of a Masters Degree in Management from the NOVA – School of Business and Economics