861 results for Cloud Computing Modelli di Business
Abstract:
One of the main features of virtualization technology is Live Migration, which allows virtual machines to be moved between physical machines without interrupting execution. This capability enables more sophisticated policies within a cloud computing environment, such as optimizing the use of electric power and computational resources. However, Live Migration can impose severe performance degradation on the applications running in the virtual machines and can affect the service provider's infrastructure in several ways, such as network congestion and interference with co-located virtual machines on the physical hosts. Unlike many other studies, this study considers the virtual machine's workload an important factor and argues that choosing the right moment to migrate a virtual machine can reduce the penalties imposed by Live Migration. This work introduces Application-aware Live Migration (ALMA), which intercepts Live Migration requests and, based on the application's workload, defers the migration to a more favorable moment. The experiments conducted in this work showed that the architecture reduced migration time by up to 74% in the benchmark experiments and by up to 67% in the experiments with real workloads. The data transfer caused by Live Migration was reduced by up to 62%. In addition, this work introduces a model that predicts the cost of Live Migration for a given workload, as well as a migration algorithm that is not sensitive to the virtual machine's memory usage.
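The deferral idea described in the abstract can be illustrated with a minimal sketch. This is not the ALMA implementation: the function name, the page-dirtying-rate series, and the threshold are all hypothetical, chosen only to show how a migration request could be postponed to a slot where the workload is light.

```python
def choose_migration_slot(dirty_rates, threshold):
    """Return the index of the first monitoring slot whose page-dirtying
    rate is below `threshold`, i.e. a favorable moment to migrate.
    If no slot qualifies, fall back to the slot with the lowest rate."""
    for i, rate in enumerate(dirty_rates):
        if rate < threshold:
            return i
    return min(range(len(dirty_rates)), key=dirty_rates.__getitem__)

# Example: the migration is deferred to the slot where the VM dirties
# the fewest memory pages, shrinking the data Live Migration must copy.
rates = [900, 850, 400, 120, 700]   # pages/s per monitoring slot (synthetic)
slot = choose_migration_slot(rates, threshold=200)
print(slot)  # → 3
```

The intuition matches the abstract's claim: less dirtied memory at migration time means shorter migrations and less transferred data.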
Abstract:
The new cloud computing paradigm enables the provision of services by third parties. Among these is Database as a Service (DaaS), which makes it possible to outsource the management and hosting of the database management system. While this can be very beneficial (cost reduction, simplified management, etc.), it raises some difficulties regarding the functionality, the performance and, especially, the security of such services. This paper describes some of the existing security proposals for DaaS systems and analyzes their main characteristics, introducing a new approach based on not exclusively relational (NoSQL) technologies that offers advantages in scalability and performance.
Abstract:
Paper presented at the V Jornadas de Computación Empotrada, Valladolid, 17-19 September 2014
Abstract:
Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve user experience. In order to achieve this, it is necessary to consider next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with the upcoming 5th Generation networking access. A cohesive end-to-end architecture is proposed, fully exploiting Information-Centric Networking together with the Mobile Follow-Me Cloud approach, for enhancing the migration of content-caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500 when a mobile user performs a request, compared against other existing solutions. The performed evaluation considers a realistic core network, with functional and non-functional measurements, including the deployment of the entire system and the computation and allocation/migration of resources. The achieved results reveal that the proposed architecture is beneficial not only from the users' perspective but also from the providers' point of view, as providers may be able to optimize their resources and reach significant bandwidth savings.
Abstract:
An on-line survey of experts was conducted to solicit their views on policy priorities in the area of information and communication technologies (ICT) in the Caribbean. The experts considered the goal to "promote teacher training in the use of ICTs in the classroom" to be the highest priority, followed by goals to "reduce the cost of broadband services" and "promote the use of ICT in emergency and disaster prevention, preparedness and response." Goals in the areas of cybercrime, e-commerce, e-government, universal service funds, consumer protection, and on-line privacy rounded out the top 10. Some of the lowest-ranked goals were those related to coordinating the management of infrastructure changes. These included the switchover to digital terrestrial television (DTT) and digital FM radio, cloud computing for government ICT, the introduction of satellite-based internet services, and the installation of content distribution networks (CDNs). Initiatives aimed at using ICT to promote specific industries, or specific means of promoting the digital economy, tended toward the centre of the rankings. Thus, a general pattern emerged which elevated the importance of focusing on how ICT is integrated into the broader society, with economic issues a lower priority, and concerns about coordination on infrastructure issues lower still.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Creating value from knowledge and knowing: Absorptive capacity or potential exploitation capability?
Abstract:
The New Caledonia ophiolite hosts one of the largest obducted mantle sections in the world, providing a unique window into upper mantle processes. These mantle rocks belong to an "atypical" ophiolitic sequence, which is dominated by refractory harzburgites but also includes minor spinel and plagioclase lherzolites. Upper crust is notably absent from the ophiolite, with the exception of some mafic-ultramafic cumulates cropping out in the southern part of the island. Although the New Caledonia ophiolite has been under investigation for decades, its ultra-depleted nature has made its characterization an analytical challenge, so that few trace element data are available, while isotopic data are completely missing. In this thesis, a comprehensive geochemical study (major and trace elements and Sr-Nd-Pb isotopes) of the peridotites and the associated intrusive mafic rocks from the New Caledonia ophiolite has been carried out. The peridotites are low-strain tectonites showing porphyroclastic textures. Spinel lherzolites are undepleted lithotypes, as attested by the presence of 7-8 vol% of Na2O- and Al2O3-rich clinopyroxene (up to 0.5 wt% Na2O; 6.5 wt% Al2O3), the Fo content of olivine (88.5-90.0 mol%) and the low Cr# of spinel (13-17). Conversely, harzburgites display a refractory nature, proven by the remarkable absence of primary clinopyroxene, very high Fo content in olivine (90.9-92.9 mol%), high Mg# in orthopyroxene (89.8-94.2) and high Cr# in spinel (39-71). REE contents show abyssal-type patterns for spinel lherzolites, while harzburgites display U-shaped patterns, typical of fore-arc settings. Spinel lherzolite REE compositions are consistent with relatively low degrees (8-9%) of fractional melting of a DMM source, starting in the garnet stability field. Conversely, REE models for harzburgites indicate high melting degrees (20-25%) of a DMM mantle source under spinel facies conditions, consistent with hydrous melting in a fore-arc setting.
Plagioclase lherzolites exhibit melt impregnation microtextures, Cr- and TiO2-enriched spinels, and a progressive increase in REE, Ti, Y and Zr with respect to spinel lherzolites. Impregnation models indicate that plagioclase lherzolites may derive from spinel lherzolites by entrapment of highly depleted MORB melts in the shallow oceanic lithosphere. The mafic intrusives are olivine gabbronorites with a very refractory composition, as attested by the high Fo content of olivine (87.3-88.9 mol%), very high Mg# of clinopyroxene (87.7-92.2) and extreme anorthite content of plagioclase (An = 90-96 mol%). The high Mg#, low TiO2 concentrations in pyroxenes and the anorthitic composition of plagioclase point to an origin from ultra-depleted primitive magmas in a convergent setting. Geochemical trace element models show that the parental melts of the gabbronorites were primitive magmas with strikingly depleted compositions, bearing only partial similarities to the primitive boninitic melts of the Bonin Islands. The first Sr, Nd and Pb isotope data obtained for the New Caledonia ophiolite highlight the presence of a DM mantle source variably modified by different processes. Nd-Sr-Pb isotopic ratios for the lherzolites (+6.98 ≤ epsilon Ndi ≤ +10.97) indicate a DM source that suffered low-temperature hydrothermal reactions. Harzburgites are characterized by a wide variation of Sr, Nd and Pb isotopic values, extending from DM-type to EM2 compositions (-0.82 ≤ epsilon Ndi ≤ +17.55), suggesting that the harzburgite source was strongly affected by subduction-related processes. Conversely, combined trace element and Sr-Nd-Pb isotopic data for the gabbronorites indicate derivation from a source with a composition similar to Indian-type mantle, but affected by fluid input in a subduction environment. These geochemical features point to an evolution in a pre-Eocene marginal basin setting, possibly in the proximity of a transform fault, for the lherzolites.
Conversely, the harzburgites acquired their main geochemical and isotopic fingerprint in a subduction zone setting.
Abstract:
Technological advancements enable new sourcing models in software development such as cloud computing, software-as-a-service, and crowdsourcing. While the first two are perceived as a re-emergence of older models (e.g., ASP), crowdsourcing is a new model that creates an opportunity for a global workforce to compete with established service providers. Organizations engaging in crowdsourcing need to develop the capabilities to successfully utilize this sourcing model in delivering services to their clients. To explore these capabilities we collected qualitative data from focus groups with crowdsourcing leaders at a large technology organization. New capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IS outsourcing as well as offers important insights to organizations that are experimenting with, or considering, crowdsourcing.
Abstract:
The world is connected by a core network of long-haul optical communication systems that link countries and continents, enabling long-distance phone calls, data-center communications, and the Internet. The demands on information rates have been constantly driven up by applications such as online gaming, high-definition video, and cloud computing. All over the world, end-user connection speeds are being increased by replacing conventional digital subscriber line (DSL) and asymmetric DSL (ADSL) with fiber to the home. Clearly, the capacity of the core network must also increase proportionally.
Abstract:
Volunteered Service Composition (VSC) refers to the process of composing volunteered services and resources. These services are typically published to a pool of voluntary resources. The composition aims at satisfying certain objectives (e.g., utilizing storage and eliminating waste, sharing space and optimizing for energy, reducing computational cost, etc.). When a single volunteered service cannot satisfy a request, VSC is required. In this paper, we contribute three approaches for composing volunteered services: exhaustive, naïve, and utility-based search. The proposed utility-based approach, for instance, measures the utility that each volunteered service can provide to each request and systematically selects the one with the highest utility. We found that the utility-based approach tends to be more effective and efficient when selecting services, while minimizing resource waste, when compared to the other two approaches.
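The utility-based selection described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the service records, the capacity field, and the utility function (penalizing leftover capacity so that resource waste is minimized) are all assumptions made for the example.

```python
def select_service(services, request):
    """Pick the volunteered service with the highest utility for a request.
    Utility here is a toy score: the less capacity left unused after
    serving the request, the higher the utility (less resource waste)."""
    feasible = [s for s in services if s["capacity"] >= request]
    if not feasible:
        return None  # no single service suffices; a composition is needed
    # Highest utility = smallest wasted capacity after serving the request.
    return max(feasible, key=lambda s: -(s["capacity"] - request))

services = [{"name": "a", "capacity": 10},
            {"name": "b", "capacity": 6},
            {"name": "c", "capacity": 3}]
print(select_service(services, 5)["name"])  # → b
```

The `None` branch corresponds to the case the abstract mentions: when no single volunteered service satisfies the request, composition of several services becomes necessary.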
Abstract:
ACM Computing Classification System (1998): H.3.3, H.5.5, J.5.
Abstract:
Continuous progress in optical communication technology and correspondingly increasing data rates in core fiber communication systems are stimulated by the ever-growing capacity demand due to constantly emerging new bandwidth-hungry services like cloud computing, ultra-high-definition video streams, etc. This demand is pushing the required capacity of optical communication lines close to the theoretical limit of a standard single-mode fiber, which is imposed by Kerr nonlinearity [1-4]. In recent years, there have been extensive efforts to mitigate the detrimental impact of fiber nonlinearity on signal transmission through various compensation techniques. However, there are still many challenges in applying these methods, because the majority of technologies utilized in inherently nonlinear fiber communication systems were originally developed for linear communication channels. Thereby, the application of "linear techniques" in fiber communication systems is inevitably limited by the nonlinear properties of the fiber medium. The quest for the optimal design of nonlinear transmission channels, the development of nonlinear communication techniques, and the use of nonlinearity in a "constructive" way have occupied researchers for quite a long time.
Abstract:
The increasing needs for computational power in areas such as weather simulation, genomics or Internet applications have led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation in the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
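The cost/performance trade-off in job-to-resource allocation mentioned above can be illustrated with a minimal greedy sketch. The site records, the cost model (queue wait plus price times job length), and all names are hypothetical; the actual policies studied in the thesis are more elaborate.

```python
def allocate(jobs, sites):
    """Greedy job-to-resource allocation sketch (illustrative only):
    assign each job to the site minimizing estimated cost, where
    cost = current queue wait + price * job length. Assigning a job
    lengthens that site's queue, so later jobs may choose elsewhere."""
    schedule = {}
    for job in jobs:
        best = min(sites, key=lambda s: s["wait"] + s["price"] * job["len"])
        schedule[job["id"]] = best["name"]
        best["wait"] += job["len"]  # the job now occupies that site's queue
    return schedule

sites = [{"name": "campus", "wait": 0, "price": 2},
         {"name": "cloud", "wait": 1, "price": 1}]
jobs = [{"id": 1, "len": 4}, {"id": 2, "len": 4}]
print(allocate(jobs, sites))  # → {1: 'cloud', 2: 'campus'}
```

Note how the first job's placement raises the cloud site's queue wait, steering the second job to the campus site: on-demand allocation decisions interact, which is exactly why site selection affects job slowdown and overall cost.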
Abstract:
Florida International University Commencement Ceremony, May 2, 2011, at US Century Bank Arena (Session 3). Colleges graduated: College of Engineering and Computing; College of Business Administration (Chapman Graduate School of Business only).