744 results for Cloud Computing, attori, piattaforme, Pattern, Orleans


Relevance:

100.00%

Publisher:

Abstract:

In recent years, the exponential growth in the use of mobile devices and of services made available in the Cloud has changed the way systems are designed and implemented, in an effort to meet requirements that until then had not been essential. With the enormous increase in mobile devices such as smartphones and tablets, the design and implementation of distributed systems has become even more important in this area, in an attempt to deliver systems and applications that are more flexible, robust, scalable and, above all, interoperable. The limited processing and storage capacity of these devices made the emergence and growth of technologies that promise to solve many of the identified problems essential. The concept of Middleware aims to fill these gaps in more advanced distributed systems, providing a solution at the level of system architecture organisation and design while offering extremely fast, secure and reliable communication. A Middleware-based architecture equips systems with a communication channel that provides strong interoperability, scalability and security in message exchange, among other advantages. In this thesis, several types and examples of distributed systems are described and analysed, together with a detailed description of three communication protocols (XMPP, AMQP and DDS), two of which (XMPP and AMQP) are used in real projects described throughout the thesis. The main objective of this thesis is to present a study and a survey of the state of the art of the Middleware concept applied to large-scale distributed systems, showing that the use of a Middleware can ease and speed up the design and development of a distributed system and brings great advantages in the near future.
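
To make the middleware idea concrete, below is a minimal sketch of publishing a message over AMQP, one of the protocols the thesis examines, using the pika Python client. The broker address, queue name and payload are illustrative assumptions, not details from the thesis projects.

```python
# Minimal AMQP publish sketch (pika client). Broker host, queue name and
# payload are hypothetical; any AMQP broker such as RabbitMQ would do.
import pika

# Connect to a local AMQP broker.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue so queued messages survive a broker restart.
channel.queue_declare(queue="sensor.readings", durable=True)

# Publish a message; delivery_mode=2 asks the broker to persist it to disk,
# which is part of what makes middleware-based messaging reliable.
channel.basic_publish(
    exchange="",
    routing_key="sensor.readings",
    body=b'{"device": "tablet-01", "battery": 0.42}',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```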

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is a new paradigm of on-demand computing services that has grown dramatically over the last ten years. In the public deployment model, the provider of cloud services describes the service to be delivered, its price, and the penalties applicable if the specifications are violated in a document called the service level agreement (SLA). By signing this contract, the client and the provider seal the guarantee of the quality of service to be received, which obliges the provider to manage its resources efficiently in order to honour its commitments. Unfortunately, violations of the SLA specifications turn out to be common, generally because of uncertainty about the behaviour of the client, who may issue a variable number of requests since the resources appear unlimited to them. This behaviour can, first, have a direct impact on the availability of the service; second, repeated violations are likely to affect the provider's trust level and its reputation for honouring its commitments. To address these problems, we propose a framework driven by a Bayesian network that, first, classifies providers in a registry according to their trust level; this registry can be managed by a third party, and a client selects a provider from it before starting to negotiate the SLA. Second, we developed a probabilistic ontology based on a multi-entity Bayesian network that can take uncertainty into account and anticipate violations through inference. This ontology makes predictions that help prevent violations, using historical data as its knowledge base. The results obtained show the effectiveness of the probabilistic ontology for predicting violations across the set of SLA parameters applied in a cloud environment.
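
As a rough illustration of the inference step, the sketch below builds a toy Bayesian network with pgmpy and queries the violation risk given an observed client load. The structure, variable names and probabilities are invented for illustration and are far simpler than the multi-entity network and probabilistic ontology the thesis describes.

```python
# Toy SLA-violation network: request load -> availability -> violation.
# All probabilities are made-up placeholders, not the thesis's model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Load", "Availability"), ("Availability", "Violation")])

cpd_load = TabularCPD("Load", 2, [[0.7], [0.3]])  # P(low)=0.7, P(high)=0.3
cpd_avail = TabularCPD(
    "Availability", 2,
    [[0.95, 0.60],   # P(up | low load), P(up | high load)
     [0.05, 0.40]],
    evidence=["Load"], evidence_card=[2],
)
cpd_viol = TabularCPD(
    "Violation", 2,
    [[0.99, 0.20],   # P(no violation | up), P(no violation | down)
     [0.01, 0.80]],
    evidence=["Availability"], evidence_card=[2],
)
model.add_cpds(cpd_load, cpd_avail, cpd_viol)

# Infer the violation risk given an observed burst of client requests.
infer = VariableElimination(model)
print(infer.query(["Violation"], evidence={"Load": 1}))
```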

Relevance:

100.00%

Publisher:

Abstract:

TransRed.com is a road freight exchange service through which users update the supply and demand for transport in real time (via the internet or by telephone), allowing the availability of cargo or of affiliated members' trucks to be consulted through remote assistance. The service comes in two variants, TransRed Empresarial (TRE) and TransRed Independiente (TRI), whose users will be road freight transport companies and independent carriers respectively. No company in Colombia currently offers this service, and there is a latent need for it, an opportunity that TransRed.com wants to seize by innovating in the sector. The potential market comprises 1,849 road freight companies and 47,000 independent carriers, which could represent total revenue of more than 55,000 million at a well-accepted market price per account of 80,000 per month for TRE and 60,000 per month for TRI. The sales objective set in this project is 504 TRE accounts and 2,020 TRI accounts by the third year, representing 27% of the TRE market and 4% of the TRI market. Given that acceptance of the service is between 70% and 80%, the sales objective is to be reached with a sales force distributed across the four main areas where the target market is concentrated (Bogotá, Antioquia, Valle and Atlántico). The project's financial results show that, if the stated objectives are met under the designed cost structure, it achieves an ROI of 54.33% in the third year, a payback period of one year and two months, and a net profit margin of 24.47% in the third year. The service will allow its users to optimise their operations through state-of-the-art technology, improving the competitiveness of the sector and integrating the different actors that participate in it.
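
For readers checking the figures, a short script reproducing the third-year revenue arithmetic from the quoted account targets and monthly prices; the annualisation step is our own assumption, and no currency is stated in the abstract.

```python
# Third-year revenue check from the plan's own figures (currency as quoted).
TRE_PRICE, TRI_PRICE = 80_000, 60_000        # monthly price per account
TRE_ACCOUNTS, TRI_ACCOUNTS = 504, 2_020      # third-year sales targets

monthly = TRE_ACCOUNTS * TRE_PRICE + TRI_ACCOUNTS * TRI_PRICE
print(f"monthly revenue: {monthly:,}")       # 161,520,000
print(f"annual revenue:  {monthly * 12:,}")  # 1,938,240,000 (our annualisation)
```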

Relevance:

100.00%

Publisher:

Abstract:

The degree project entitled “Propuesta de Computación en la Nube para los Libreros del Sector de San Victorino en Bogotá para el Año 2012. Estudio de Caso” (A Cloud Computing Proposal for the Booksellers of the San Victorino Area of Bogotá for 2012: A Case Study) aims to develop a cloud computing proposal applicable to one of the most dynamic sectors of central Bogotá, the booksellers of San Victorino. The work seeks the appropriation of the tools provided by the new information and communication technologies within the discipline of business administration, as a new competitive scenario adjusted to the new conditions organisations face. Cloud computing, as a new technological tool, lets multiple users and businesses access consolidated information, which they can contribute themselves, in order to offer customers a more efficient and effective service at low cost, generating numerous advantages for the competitiveness of each business unit, with the units consolidating into a single organisation or cluster. In this regard, the study establishes how the San Victorino sector, and the booksellers in particular, share a set of similar characteristics and goals that make such an implementation feasible, with the benefits described in this work: access to general information on publishing products and inventories, and the exact location of the bookseller who holds a given title.

Relevance:

100.00%

Publisher:

Abstract:

Animation on Green IT. Container EdShare to be used to assemble the resources developed by the team for coursework 2. The metadata will be updated by the group, and the description modified when the resource set has been completed. The resource has been set up with an instructional readme file, which each group will replace with a new readme file. Each group needs to update this description. Further instructions are in readme.txt.

Relevance:

100.00%

Publisher:

Abstract:

These slides give the instructions for completing the COMP1214 team project on cloud and mobile computing.

Relevance:

100.00%

Publisher:

Abstract:

Information technologies have become an important factor in each of the processes carried out along the supply chain. Their implementation and correct use give companies advantages that improve operational performance across the chain. The development and application of software have contributed to integrating the chain's different members, so that everyone from suppliers to the end customer perceives benefits in operational performance and in satisfaction respectively. It is also important to note that implementation does not always yield positive results; on the contrary, the implementation process can be seriously affected by barriers that prevent the benefits of ICT from being maximised.

Relevance:

100.00%

Publisher:

Abstract:

The second decade of the twenty-first century is witnessing a complete transformation in the teaching and learning of foreign languages in general, and of Spanish in particular. In this new context, new technologies applied to education play a very important role; this universe is known as Web 2.0. The advantages and drawbacks of using these technologies in the language classroom are presented, focusing on the four digital tools that offer the greatest potential for energising Spanish classes: blogs, wikis, podcasting, and Google Docs and cloud computing.

Relevance:

100.00%

Publisher:

Abstract:

This Policy Contribution assesses the broad obstacles hampering ICT-led growth in Europe and identifies the main areas in which policy could unlock the greatest value. We review estimates of the value that could be generated through take-up of various technologies and carry out a broad matching with policy areas. According to the literature survey and the collected estimates, the areas in which the right policies could unlock the greatest ICT-led growth are product and labour market regulations and the European Single Market. These areas should be reformed to make European markets more flexible and competitive, which would promote wider adoption of modern data-driven organisational and management practices, thereby helping to close the productivity gap between the United States and the European Union. Gains could also be made in the areas of privacy, data security, intellectual property and liability pertaining to the digital economy, especially cloud computing, and in next-generation network infrastructure investment. Standardisation and spectrum allocation issues are found to be important, though to a lesser degree. Strong complementarities between the analysed technologies suggest, however, that policymakers need to address all of the identified obstacles in order to fully realise the potential of ICT to spur long-term growth, beyond the partial gains that we report.

Relevance:

100.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to measure the overheads of para-virtualization, and also the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by Cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different systems: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources; the guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run; no modifications are needed in the guest OS or application, i.e. they are not aware of the virtualized environment and run normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines, i.e. these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both approaches can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization in which some modifications are made to the guest operating system to enable better performance: the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware, and its device drivers coordinate with the device drivers of the host operating system, reducing performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with the slow hardware interrupts that exist when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high-performance computing, which in turn has implications for the use of cloud computing to host HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a “class” of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for Cloud service providers to support Service Level Agreements (SLAs) so that system utilisation can be audited.
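
As an informal illustration of the measurement procedure, the sketch below times repeated runs of a benchmark binary and appends the results to a log, to be executed once per environment (bare metal, then each para-virtualized guest). The binary path and labels are placeholders; the paper itself uses the Netlib suites and its own monitoring and logging setup.

```python
# Time repeated runs of a benchmark command and log wall-clock results,
# so bare-metal and para-virtualized timings can be compared afterwards.
import json
import subprocess
import time

def run_benchmark(cmd, label, repeats=5):
    """Run `cmd` several times and log wall-clock timings for later comparison."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    record = {"system": label, "cmd": cmd, "seconds": timings}
    with open("benchmark_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return min(timings)

# Run once per environment, e.g. label "bare-metal" vs "xen-paravirt";
# the overhead is the ratio of the two best timings. Binary is hypothetical.
best = run_benchmark(["./linpack_bench"], "bare-metal")
print(f"best wall-clock time: {best:.2f}s")
```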

Relevance:

100.00%

Publisher:

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historical patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertising campaigns; and finance experts are interested in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and to the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
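
A minimal sketch of the data-parallel pattern such chapters typically survey: the data set is partitioned, each worker counts co-occurring item pairs in its partition, and the partial counts are merged, a map-reduce-style computation. The transactions below are invented for illustration.

```python
# Data-parallel itemset counting: map (count pairs per partition),
# then reduce (merge the partial Counters).
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def count_pairs(transactions):
    """Map step: count co-occurring item pairs within one data partition."""
    counts = Counter()
    for items in transactions:
        counts.update(combinations(sorted(items), 2))
    return counts

if __name__ == "__main__":
    data = [{"milk", "bread"}, {"milk", "beer"}, {"milk", "bread", "beer"}] * 1000
    partitions = [data[i::4] for i in range(4)]  # split across 4 workers

    with Pool(4) as pool:
        partials = pool.map(count_pairs, partitions)

    total = sum(partials, Counter())  # reduce step: merge partial counts
    print(total.most_common(3))
```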

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces an architecture for real-time identification and modelling at a copper mine, using new technologies such as M2M and cloud computing, with a server in the cloud and an Android client inside the mine. The proposed design enables pervasive mining: a system with wider coverage, higher communication efficiency, better fault tolerance, and anytime, anywhere availability. The solution was designed for a plant inside the mine that cannot tolerate interruption and for which in-situ, real-time identification is an essential part of the system, controlling aspects such as instability by adjusting the corresponding parameters without stopping the process.
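
A minimal sketch of the device-to-cloud link in such an architecture: a client inside the mine posts sensor readings to a server in the cloud over HTTP. The endpoint, field names and readings are illustrative assumptions; the paper's client is an Android application and its actual protocol is not described here.

```python
# Hypothetical M2M client: post a sensor reading to a cloud endpoint.
import json
import time
import urllib.request

ENDPOINT = "https://example.com/api/readings"  # placeholder cloud server

def post_reading(device_id, vibration, temperature):
    payload = json.dumps({
        "device": device_id,
        "vibration": vibration,
        "temperature": temperature,
        "timestamp": time.time(),
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # 200 when the cloud service accepted the reading

print(post_reading("crusher-07", vibration=0.83, temperature=41.2))
```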

Relevance:

100.00%

Publisher:

Abstract:

Body Sensor Networks (BSNs) have recently been introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach to storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of the data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform with BSN data-stream middleware, and provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study on the real-time monitoring and analysis of the cardiac data streams of many individuals.
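
As a flavour of the online analysis BodyCloud targets, the sketch below runs a sliding window over a simulated cardiac stream and raises an alert when the windowed average heart rate leaves a safe band. Thresholds, window size and the stream itself are illustrative assumptions, not BodyCloud APIs.

```python
# Sliding-window heart-rate monitor over a (simulated) cardiac data stream.
from collections import deque
from statistics import mean

WINDOW = 30  # seconds of one-sample-per-second heart-rate data

def monitor(stream, low=40, high=180):
    """Yield an alert whenever the windowed average leaves the safe band."""
    window = deque(maxlen=WINDOW)
    for sample in stream:
        window.append(sample)
        avg = mean(window)
        if not low <= avg <= high:
            yield f"alert: windowed HR {avg:.0f} bpm outside [{low}, {high}]"

# Simulated stream: resting rhythm followed by a tachycardic episode.
stream = [72] * 60 + [190] * 60
for alert in monitor(stream):
    print(alert)
```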

Relevance:

100.00%

Publisher:

Abstract:

With the increase in e-commerce and the digitisation of design data and information, the construction sector has become reliant upon IT infrastructure and systems. The design and production process is more complex and more interconnected, and relies upon greater information mobility, with seamless exchange of data and information in real time. Construction small and medium-sized enterprises (CSMEs), in particular the speciality contractors, can effectively utilise cost-effective collaboration-enabling technologies, such as cloud computing, to help transfer information and data and so improve productivity. The system dynamics (SD) approach offers a perspective and tools for better understanding the dynamics of complex systems. This research uses system dynamics methodology as a modelling and analysis tool to understand and identify the key drivers in the absorption of cloud computing by CSMEs. The aim of this paper is to determine how the use of SD can improve the management of information flow through collaborative technologies, leading to improved productivity. The data supporting the use of system dynamics were obtained through a pilot study consisting of questionnaires and interviews with five CSMEs in the UK house-building sector.
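
To illustrate the stock-and-flow style of system dynamics modelling, here is a minimal Bass-style diffusion sketch of cloud adoption among CSMEs, integrated with simple Euler steps. All coefficients and the population size are illustrative assumptions, not estimates from the pilot study.

```python
# Minimal SD stock-and-flow model: cloud adoption as a single stock fed by
# an adoption-rate flow driven by innovation and word of mouth.
POPULATION = 1000        # CSMEs in the market (assumed)
p, q = 0.03, 0.38        # innovation and imitation coefficients (assumed)
dt, years = 0.25, 10     # quarterly Euler integration steps

adopters = 0.0
for step in range(int(years / dt)):
    potential = POPULATION - adopters
    adoption_rate = (p + q * adopters / POPULATION) * potential  # flow
    adopters += adoption_rate * dt                               # stock update
    if step % 4 == 3:  # report at the end of each year
        print(f"year {step // 4 + 1}: {adopters:.0f} adopters")
```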

Relevance:

100.00%

Publisher:

Abstract:

The use of cloud services has made forensic investigations more complicated. There would, however, be good conditions if the cloud providers created services for retrieving all of the information; that would make investigations simpler and more reliable. The information to be retrieved from cloud services is difficult to extract in a correct manner: the examination is not performed on a write-protected copy, but in an environment that is liable to change. Changes may therefore be made while the data is being retrieved, and they are not always visible. Nor is it possible to compare the differences by taking hash sums of the files, as is done in forensic examinations of computers. It is therefore important to document how the information was retrieved, preferably by filming the computer screen while it is being extracted. When the cloud services Office 365 and Google Apps are used, the information is stored in several places, both in the cloud and on the computer or computers that have been used to connect to the cloud service. Web browsers save a great deal of information about what has been done. It is therefore important to be able to find out which computers have been used to connect to the cloud service, which today is not possible. If the computers that were used can be examined, evidence that no longer remains in the cloud may be found. From a forensic point of view, the best solution would be for cloud service providers to offer a service that retrieves all data concerning a user, including all relevant logs. Retrieval would then be much more secure, since it would not be possible to alter the information while it is being extracted.
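
One concrete form of the documentation step recommended above is to hash every file at acquisition time, so that later changes to the exported copy (though not to the live cloud data) can be detected. A minimal sketch, assuming the export has already been saved to a local directory; paths and the log format are illustrative.

```python
# Hash every file in an exported cloud account and append an acquisition log.
import hashlib
import json
import time
from pathlib import Path

def log_acquisition(export_dir, log_path="acquisition_log.jsonl"):
    with open(log_path, "a") as log:
        for path in sorted(Path(export_dir).rglob("*")):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                log.write(json.dumps({
                    "file": str(path),
                    "sha256": digest,
                    "acquired_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
                }) + "\n")

log_acquisition("office365_export/")  # hypothetical export directory
```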