958 results for iaas, anonymous cloud, p2p, anonymizing network, gossip


Relevance:

30.00%

Publisher:

Abstract:

Due to advances in mobile devices and wireless networks, mobile cloud computing, which combines mobile computing and cloud computing, has gained momentum since 2009. The characteristics of mobile devices and wireless networks make the implementation of mobile cloud computing more complicated than for fixed clouds, and this work outlines some of the major resulting issues. One of the key issues in mobile cloud computing is the end-to-end delay in servicing a request. Data caching is one of the techniques widely used in wired and wireless networks to improve data access efficiency. In this paper we explore the possibility of a cooperative caching approach to enhance data access efficiency in mobile cloud computing. The proposed approach is based on cloudlets, one of the architectures designed for mobile cloud computing.
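As an illustration of the idea only (the class and helper names below are hypothetical and not taken from the paper), a cloudlet-based cooperative cache can resolve a request in three steps: the local cache first, then neighbouring cloudlets, and only then the distant cloud. A sketch:

    from typing import Optional

    class Cloudlet:
        """Hypothetical edge cache node that cooperates with nearby cloudlets."""

        def __init__(self, name, peers=None):
            self.name = name
            self.cache = {}            # local key -> data store
            self.peers = peers or []   # nearby cloudlets consulted on a miss

        def lookup(self, key) -> Optional[bytes]:
            if key in self.cache:                # 1. local hit: lowest delay
                return self.cache[key]
            for peer in self.peers:              # 2. cooperative hit at a neighbour
                data = peer.cache.get(key)
                if data is not None:
                    self.cache[key] = data       # replicate locally for next time
                    return data
            data = fetch_from_cloud(key)         # 3. miss: costly wide-area fetch
            if data is not None:
                self.cache[key] = data
            return data

    def fetch_from_cloud(key) -> Optional[bytes]:
        # Placeholder for the high-latency request to the fixed cloud.
        return None

The cooperative step matters because a neighbouring cloudlet is typically close in the access network, so a hit there avoids most of the end-to-end delay of a round trip to the remote cloud.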

Relevance:

30.00%

Publisher:

Abstract:

Nowadays the use of ICT in a company has become indispensable regardless of its size or industry, to the point that simply using it is no longer a competitive advantage; the key is determining which technology is best for the company. There are online services on the market that match or exceed the standards of paid technologies that must even be installed within the company, consuming resources and in some cases underutilizing them; cloud computing services offer the flexibility and capacity a company requires according to its needs for growth and for the continuous change that the market currently demands. Some companies do not invest in these technologies because they do not know they can access them at low cost, in some cases without paying a single cent, and they lose competitiveness and productivity out of simple lack of awareness. The use of SaaS and IaaS services affects cash flow positively compared with an investment in in-house infrastructure, especially for companies that are just starting to operate or that are undergoing technological renewal. After conducting this research, the use of cloud computing services is recommended, but it is evident that they are not useful for every company; their usefulness depends on the criticality of the applications or infrastructure, the moment the company is going through, the size of the company, the choice between a public and a private cloud, the fact that virtualizing sometimes also means rationalizing, and a comprehensive view of security. Furthermore, after tabulating the data obtained in survey number 2, it became evident that the launch of EcuFlow's cloud service increased demand for this tool in the market.

Relevance:

30.00%

Publisher:

Abstract:

Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index that is correlated with its topological locality. A popular dimensionality-reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there exists little comparison between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. The Hilbert curve, Sammon's mapping and Principal Component Analysis have been used to generate a one-dimensional space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial two-dimensional network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
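For illustration only (assumptions: two landmark latencies quantized onto a 256 x 256 grid, and the textbook iterative Hilbert-index construction rather than the authors' implementation), a landmark latency vector can be reduced to a scalar peer identifier as follows:

    def hilbert_index(order, x, y):
        # Map grid point (x, y), with 0 <= x, y < 2**order, to its 1-D Hilbert index.
        n = 2 ** order
        d, s = 0, n // 2
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                          # rotate/flip the quadrant
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    def peer_id(latencies_ms, max_latency_ms=500.0, order=8):
        # Scale a two-landmark latency vector onto the grid and return a scalar ID.
        grid_max = 2 ** order - 1
        cx, cy = (min(int(l / max_latency_ms * grid_max), grid_max) for l in latencies_ms)
        return hilbert_index(order, cx, cy)

    # Peers with similar latency vectors tend to obtain numerically close identifiers.
    print(peer_id([40.0, 120.0]), peer_id([42.0, 118.0]), peer_id([400.0, 30.0]))

Because nearby grid points tend to receive nearby Hilbert indices, peers with similar landmark latencies usually end up with close identifiers, which is the locality-preservation property the paper compares against Sammon's mapping and PCA.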

Relevance:

30.00%

Publisher:

Abstract:

This paper evaluates the relationship between the cloud modification factor (CMF) in the ultraviolet erythemal range and the cloud optical depth (COD) retrieved from the Aerosol Robotic Network (AERONET) "cloud mode" algorithm under overcast conditions (confirmed with sky images) at Granada, Spain, mainly for non-precipitating, overcast and relatively homogeneous water clouds. The empirical CMF showed a clear exponential dependence on the experimental COD values, decreasing approximately from 0.7 for COD = 10 to 0.25 for COD = 50. In addition, these COD measurements were used as input to the libRadtran radiative transfer code, allowing the simulation of CMF values for the selected overcast cases. The modeled CMF exhibited a dependence on COD similar to the empirical CMF, but the modeled values show a strong underestimation with respect to the empirical factors (mean bias of 22%). To explain this large bias, an exhaustive comparison between modeled and experimental UV erythemal irradiance (UVER) data was performed. The comparison revealed that the radiative transfer simulations were 8% higher than the observations under clear-sky conditions. The rest of the bias (~14%) may be attributed to the substantial underestimation of modeled UVER with respect to experimental UVER under overcast conditions, although the correlation between both datasets was high (R2 ~ 0.93). A sensitivity test showed that the main factor responsible for this underestimation is the experimental AERONET COD used as input in the simulations, which is retrieved from zenith radiances in the visible range. Accordingly, effective COD values in the erythemal interval were derived from an iterative procedure that searches for the best match between modeled and experimental UVER values for each selected overcast case. These effective COD values were smaller than the AERONET COD data in about 80% of the overcast cases, with a mean relative difference of 22%.
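As a rough illustration of the reported exponential dependence (the data points below are illustrative, loosely matching the quoted range, and are not the study's measurements), a relation of the form CMF = a * exp(-b * COD) can be fitted like this:

    import numpy as np
    from scipy.optimize import curve_fit

    def cmf_model(cod, a, b):
        # Exponential dependence of the cloud modification factor on cloud optical depth.
        return a * np.exp(-b * cod)

    # Hypothetical (COD, CMF) pairs roughly spanning the reported range
    # (CMF ~0.7 at COD = 10 falling to ~0.25 at COD = 50).
    cod = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
    cmf = np.array([0.70, 0.55, 0.42, 0.32, 0.25])

    (a, b), _ = curve_fit(cmf_model, cod, cmf, p0=(1.0, 0.02))
    print(f"CMF ~ {a:.2f} * exp(-{b:.3f} * COD)")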

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing is usually regarded as being energy efficient and thus emitting fewer greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the network and user device stages were measured directly. A new measurement and software-apportionment approach was defined and used, allowing the power consumption of cloud services to be measured directly at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of the cloud-based Outlook and Excel was 8% and 17% lower, respectively, than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward for reducing energy consumption and GHG emissions. Direct conversion from the standalone package into the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stages using the methods described in this research.
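A minimal sketch of the three-stage accounting the model describes (the per-stage figures below are placeholders, not the study's confidential or measured values):

    # Per-task energy (Wh) at the three stages of the model:
    # data center, network, and end-user device.
    def total_energy_wh(data_center_wh, network_wh, user_device_wh):
        return data_center_wh + network_wh + user_device_wh

    CO2E_G_PER_WH = 0.5   # illustrative grid emission factor (g CO2e per Wh)

    cloud_task = total_energy_wh(data_center_wh=0.8, network_wh=0.6, user_device_wh=2.1)
    local_task = total_energy_wh(data_center_wh=0.0, network_wh=0.0, user_device_wh=3.2)
    for name, e in (("cloud", cloud_task), ("standalone", local_task)):
        print(f"{name}: {e:.1f} Wh, {e * CO2E_G_PER_WH:.1f} g CO2e")

Whether the cloud or the standalone variant wins in such a comparison depends entirely on the measured per-stage values, which is exactly the product-by-product variation the study reports.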

Relevance:

30.00%

Publisher:

Abstract:

Body area networks (BANs) are emerging as an enabling technology for many human-centered application domains such as health care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality. A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly exploited to monitor a single assisted person. Data generated by a BAN can be processed in real time by the BAN coordinator and/or transmitted to the server side for online/offline processing and long-term storage. A network of BANs worn by a community of people produces a large amount of contextual data that requires a scalable and efficient approach to processing and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of cloud-assisted BANs and discuss the main challenges that need to be addressed for their development and management. The current state of the art is reviewed and framed according to the main requirements for effective cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, and dynamic deployment and management are discussed.
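A minimal sketch of the data path described above, with a BAN coordinator batching sensor samples and handing them to a cloud back end; the endpoint, payload shape and sampling code are assumptions for illustration, not part of the paper:

    import json
    import time
    from urllib.request import Request, urlopen

    CLOUD_ENDPOINT = "https://example.com/ban/ingest"   # hypothetical ingestion endpoint

    def collect_batch(read_sample, batch_size=50):
        # Coordinator-side batching: gather wearable-sensor samples before upload.
        return [dict(read_sample(), timestamp=time.time()) for _ in range(batch_size)]

    def upload(batch):
        # Push one batch to the cloud side for online/offline processing and storage.
        req = Request(CLOUD_ENDPOINT, data=json.dumps(batch).encode(),
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            return resp.status

    # A fake accelerometer source standing in for a real BAN node.
    fake_sample = lambda: {"node": "accel-1", "x": 0.01, "y": -0.98, "z": 0.12}
    batch = collect_batch(fake_sample)
    # upload(batch)   # would POST the batch to the (hypothetical) cloud endpoint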

Relevance:

30.00%

Publisher:

Abstract:

The Finnish Meteorological Institute, in collaboration with the University of Helsinki, has established a new ground-based remote-sensing network in Finland. The network consists of five topographically, ecologically and climatically different sites distributed from southern to northern Finland. The main goal of the network is to monitor air pollution and boundary layer properties in near real time, with a Doppler lidar and a ceilometer at each site. In addition to these operational tasks, two sites are members of the Aerosols, Clouds and Trace gases Research InfraStructure Network (ACTRIS): a Ka-band cloud radar at Sodankylä will provide cloud retrievals within CloudNet, and a multi-wavelength Raman lidar, PollyXT (POrtabLe Lidar sYstem eXTended), in Kuopio provides optical and microphysical aerosol properties through EARLINET (the European Aerosol Research Lidar Network). Three C-band weather radars are located in the Helsinki metropolitan area and are deployed for operational and research applications. We performed two inter-comparison campaigns to investigate the Doppler lidar performance, to compare the backscatter signal and wind profiles, and to optimize the lidar sensitivity by adjusting the telescope focus length and data-integration time to ensure a sufficient signal-to-noise ratio (SNR) in low-aerosol-content environments. In terms of statistical characterization, the wind-profile comparison showed good agreement between the different lidars. Initially, there was a discrepancy in the SNR and attenuated backscatter coefficient profiles, which arose from an incorrectly reported telescope focus setting for one instrument, together with the need for calibration. After diagnosing the true telescope focus length, calculating a new attenuated backscatter coefficient profile with the new telescope function and taking the calibration into account, the resulting attenuated backscatter profiles all showed good agreement with each other. It was thought that harsh Finnish winters could pose problems, but, owing to the built-in heating systems, low ambient temperatures had no, or only a minor, impact on lidar operation, including scanning-head motion. However, accumulation of snow and ice on the lens has been observed, which can lead to the formation of a water/ice layer that attenuates the signal inconsistently; care must therefore be taken to ensure continuous snow removal.
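For background on the sensitivity tuning mentioned above, a longer data-integration time raises the signal-to-noise ratio of an averaged profile roughly as the square root of the number of pulses averaged; a toy sketch with synthetic numbers (not campaign data):

    import numpy as np

    rng = np.random.default_rng(0)

    def averaged_snr(signal_level, noise_std, n_pulses):
        # SNR of an n-pulse average: the noise of the mean falls as sqrt(n_pulses).
        samples = signal_level + rng.normal(0.0, noise_std, size=n_pulses)
        return samples.mean() / (noise_std / np.sqrt(n_pulses))

    # Weak backscatter (low aerosol content): compare short and long integration.
    for n in (100, 1000, 10000):
        print(n, round(averaged_snr(signal_level=0.05, noise_std=1.0, n_pulses=n), 2))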

Relevance:

30.00%

Publisher:

Abstract:

With the growing prevalence of smartphones and 4G LTE networks, the demand for faster, better and cheaper mobile services anytime and anywhere is ever increasing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper subsequently outlines a design and a full cycle of a DNO system. A set of potential solutions for large-network and real-time DNO is also proposed. Overall, this work constitutes a breakthrough towards the realization of DNO.
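A minimal sketch of the coarse-grained pattern a cloud-based parallel solution of this kind relies on: independent evaluations of candidate network settings farmed out to worker processes (standard library only; the cost function and candidate grid are placeholders, not the paper's optimization algorithms):

    from concurrent.futures import ProcessPoolExecutor

    def evaluate_settings(settings):
        # Placeholder cost for one candidate (antenna tilt, transmit power) pair.
        tilt, power = settings
        return (tilt - 4.0) ** 2 + (power - 20.0) ** 2   # lower is better

    def optimize_parallel(candidates, workers=4):
        # Score all candidates in parallel and return the best one.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            costs = list(pool.map(evaluate_settings, candidates))
        best = min(range(len(candidates)), key=costs.__getitem__)
        return candidates[best], costs[best]

    if __name__ == "__main__":
        grid = [(t, p) for t in range(0, 10) for p in range(15, 25)]
        print(optimize_parallel(grid))

Because each candidate evaluation is independent, the same loop scales from a multi-core machine to a pool of cloud instances, which is the property DNO needs to bring optimization runtimes down to a sub-hourly budget.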

Relevance:

30.00%

Publisher:

Abstract:

The simulated annealing approach to crystal structure determination from powder diffraction data, as implemented in the DASH program, is readily amenable to parallelization at the level of individual runs. Very large increases in execution speed can be achieved by distributing individual DASH runs over a network of computers. The CDASH program delivers this by using scalable on-demand computing clusters built on the Amazon Elastic Compute Cloud service. By way of example, a 360 vCPU cluster returned the crystal structure of racemic ornidazole (Z′ = 3, 30 degrees of freedom) about 40 times faster than a typical modern quad-core desktop CPU. Whilst used here specifically for DASH, this approach is of general applicability to other packages that are amenable to coarse-grained parallelism strategies.
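To illustrate the run-level parallelism being exploited, here is a toy simulated-annealing kernel with independent restarts; the restarts share no state, which is exactly why they can be scattered across cluster nodes (the objective function is illustrative and nothing below comes from DASH or CDASH):

    import math
    import random

    def anneal(cost, start, steps=20000, t0=1.0, seed=0):
        # One independent simulated-annealing run from a given starting point.
        rng = random.Random(seed)
        x, best = start, start
        for i in range(steps):
            t = t0 * (1 - i / steps) + 1e-9          # linear cooling schedule
            cand = x + rng.uniform(-0.5, 0.5)
            delta = cost(cand) - cost(x)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                x = cand                              # accept downhill, or uphill by Metropolis rule
            if cost(x) < cost(best):
                best = x
        return best, cost(best)

    # A bumpy 1-D stand-in for a structure-determination objective.
    cost = lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(15 * x)

    # Independent restarts: these are the units a cluster would distribute.
    runs = [anneal(cost, start=random.uniform(-10, 10), seed=s) for s in range(8)]
    print(min(runs, key=lambda r: r[1]))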

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing means the use of computing resources that are available over a network, usually the Internet, and is an area that has grown rapidly in recent years. More and more companies are migrating all or part of their operations to the cloud. Sogeti in Borlänge needs to migrate its development environments to a cloud service, as their operation and maintenance are costly and time-consuming. As a Microsoft partner, Sogeti wants to use Microsoft's cloud computing service, Windows Azure, for this purpose. Migration to the cloud is a new area for Sogeti, and they have no descriptions of how such a process is carried out. Our assignment was to develop an approach for migrating an IT solution to the cloud. Part of the assignment was therefore to map out cloud computing, its components, and its advantages and disadvantages, which gave us fundamental knowledge of the subject. To develop a migration approach, we performed several migrations of virtual machines to Windows Azure and, based on these migrations, literature studies and interviews, drew conclusions that resulted in a general approach for migration to the cloud. The results have shown that it is difficult to produce a general yet detailed description of a migration approach, since the scenario differs depending on what is to be migrated and which type of cloud service is used. However, based on our experience from the migrations, together with literature studies, document studies and interviews, we have raised our knowledge to a general level. From this knowledge we have compiled a general approach with greater focus on the preparatory activities an organization should carry out before migration. Our studies have also resulted in an in-depth description of cloud computing. In our study, we have not seen anyone previously describe critical success factors in connection with cloud computing. In our empirical work, however, we have identified three critical success factors for cloud computing and have thereby covered part of that knowledge gap.

Relevance:

30.00%

Publisher:

Abstract:

Some approaches take advantage of unused computational resources in Internet nodes (users' machines). In recent years, peer-to-peer (P2P) networks have gained momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present some problems, such as node overhead due to message routing, a large number of node reconfigurations when the network topology changes, routing of traffic inside a specific network even when the traffic is not directed to a machine in that network, and the lack of a relationship between the proximity of P2P nodes in the overlay and their proximity in the IP network. Although some architectures use information about node distances in the IP network, they rely on methods that require dynamic information. In this work we propose a P2P architecture to address the aforementioned problems. It is composed of three parts. The first part is a basic P2P architecture, called SGrid, which maintains a relationship between nodes in the P2P network and their positions in the IP network; it assigns adjacent key regions to nodes of the same organization. The second part is a protocol called NATal (Routing and NAT application layer) that extends the basic architecture in order to remove the responsibility of routing messages from the nodes. The third part is a special kind of node, called LSP (Lightware Super-Peer), which is responsible for maintaining the P2P routing table. In addition, this work presents a simulator that validates the architecture and a module of the NATal protocol to be used in Linux routers.
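A minimal sketch of the key-assignment idea described for SGrid, deriving a node key whose high-order bits come from an organization identifier so that nodes of one organization occupy a contiguous key region; the bit split and hashing below are assumptions for illustration, not the published SGrid scheme:

    import hashlib

    NODE_BITS = 16     # low-order bits distinguishing nodes inside one region

    def node_key(org_id: str, node_id: str) -> int:
        # High-order bits identify the organization's region of the key space;
        # low-order bits place the individual node inside that region.
        org_part = int.from_bytes(hashlib.sha1(org_id.encode()).digest()[:2], "big")
        node_part = int.from_bytes(hashlib.sha1(node_id.encode()).digest()[:2], "big")
        return (org_part << NODE_BITS) | node_part

    # Nodes of the same organization share the high-order prefix, hence adjacent keys.
    for node in ("alice-laptop", "lab-server", "printer-42"):
        print(node, hex(node_key("example.org", node)))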

Relevance:

30.00%

Publisher:

Abstract:

Previous works have studied the characteristics and peculiarities of P2P networks, especially information security aspects. Most of them deal, in some way, with the sharing of resources and, in particular, with file storage. This work complements previous studies and adds new definitions relating to this kind of system. A system for the safe storage of files (SAS-P2P) was specified and built on P2P technology using the JXTA platform. The system uses standard X.509 and PKCS #12 digital certificates, issued and managed by a public key infrastructure that was also specified and developed on P2P technology (PKIX-P2P). The information is stored in a specially prepared XML file, which facilitates handling and interoperability among applications. The intention in developing the SAS-P2P system was to offer a complementary service to users of the Giga Natal network, through which participants can collaboratively build a shared storage area with important security features such as availability, confidentiality, authenticity and fault tolerance. Besides the specification, prototyping and testing of the SAS-P2P system, the PKIX-P2P Manager module was also tested in order to assess its fault tolerance and the effective calculation of the reputation of the certification authorities participating in the system.
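A minimal sketch in the spirit of the description above, signing a file with a key taken from a PKCS #12 container and wrapping the result in a small XML record; it assumes an RSA key, the pyca/cryptography package, and an entirely made-up XML layout (the actual SAS-P2P format is not reproduced here):

    import base64
    import hashlib
    import xml.etree.ElementTree as ET

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.serialization import pkcs12

    def signed_entry(p12_path: str, p12_password: bytes, file_path: str) -> bytes:
        # Load the member's private key and certificate from a PKCS #12 container.
        with open(p12_path, "rb") as fh:
            key, cert, _chain = pkcs12.load_key_and_certificates(fh.read(), p12_password)

        data = open(file_path, "rb").read()
        signature = key.sign(data, padding.PKCS1v15(), hashes.SHA256())   # assumes RSA

        # Wrap file metadata and signature in a (hypothetical) XML record for storage.
        entry = ET.Element("storedFile", name=file_path)
        ET.SubElement(entry, "sha256").text = hashlib.sha256(data).hexdigest()
        ET.SubElement(entry, "signer").text = cert.subject.rfc4514_string()
        ET.SubElement(entry, "signature").text = base64.b64encode(signature).decode()
        return ET.tostring(entry)

    # Usage (paths and password are placeholders):
    # print(signed_entry("member.p12", b"secret", "report.pdf").decode())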

Relevance:

30.00%

Publisher:

Abstract:

In this work, a perceptron neural-network technique is applied to estimate hourly values of diffuse solar radiation at the surface in São Paulo City, Brazil, using as input the global solar radiation and other meteorological parameters measured from 1998 to 2001. The neural-network verification was performed using hourly measurements of diffuse solar radiation obtained during 2002. The neural network was developed based on both feature-determination and pattern-selection techniques. It was found that including the atmospheric long-wave radiation as an input improves the neural-network performance. On the other hand, traditional meteorological parameters, such as air temperature and atmospheric pressure, are not as important as long-wave radiation, which acts as a surrogate for cloud-cover information on the regional scale. An objective evaluation showed that the diffuse solar radiation is better reproduced by the neural-network synthetic series than by a correlation model. (C) 2004 Elsevier Ltd. All rights reserved.
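A toy sketch of this kind of model, a small multilayer perceptron mapping global and long-wave radiation to diffuse radiation, using scikit-learn and purely synthetic data (the paper's network, feature selection and measurements are not reproduced here):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)

    # Synthetic hourly inputs standing in for the measured training data:
    # global solar radiation and atmospheric long-wave radiation (both W/m^2).
    n = 2000
    global_rad = rng.uniform(0, 1000, n)
    longwave = rng.uniform(250, 450, n)
    # Fabricated target, only to make the example runnable; not a physical relation.
    diffuse = 0.3 * global_rad * (longwave / 450) + rng.normal(0, 20, n)

    X = np.column_stack([global_rad, longwave])
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X[:1500], diffuse[:1500])          # training portion
    print("R^2 on held-out hours:", round(model.score(X[1500:], diffuse[1500:]), 3))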

Relevance:

30.00%

Publisher:

Abstract:

This paper describes lightning characteristics obtained in four sets of lightning measurements during recent field campaigns in different parts of the world, from mid-latitudes to the tropics, by the novel VLF/LF (very low frequency/low frequency) lightning detection network (LINET). The paper gives a general overview of the approach and a synopsis of the statistical results for the observation periods as a whole and for one special day in each region. The focus is on the characteristics of lightning that can specifically be observed by this system, such as intra-cloud and cloud-to-ground stroke statistics, vertical distributions of intra-cloud strokes, and peak current distributions. Some conclusions regarding lightning-produced NOx are also presented, as this was one of the aims of the tropical field campaigns TROCCINOX (Tropical Convection, Cirrus and Nitrogen Oxides Experiment) and TroCCiBras (Tropical Convection and Cirrus Experiment Brazil) in Brazil during January/February 2005, of SCOUT-O3 (Stratospheric-Climate Links with Emphasis on the Upper Troposphere and Lower Stratosphere) and TWP-ICE (Tropical Warm Pool-International Cloud Experiment) in the Darwin area of northern Australia during November/December 2005 and January/February 2006, respectively, and of AMMA (African Monsoon Multidisciplinary Analyses) in West Africa during June-November 2006.

Relevance:

30.00%

Publisher:

Abstract:

Until mid-2006, SCIAMACHY data processors for the operational retrieval of nitrogen dioxide (NO2) column data were based on the historical version 2 of the GOME Data Processor (GDP). On top of the known problems inherent to GDP 2, ground-based validation of SCIAMACHY NO2 data revealed issues specific to SCIAMACHY, such as a large cloud-dependent offset occurring at northern latitudes. In 2006, the GDOAS prototype algorithm of the improved GDP version 4 was transferred to the off-line SCIAMACHY Ground Processor (SGP) version 3.0. In parallel, the calibration of SCIAMACHY radiometric data was upgraded. Before the operational switch-on of SGP 3.0 and the public release of upgraded SCIAMACHY NO2 data, we investigated the accuracy of the algorithm transfer: (a) by checking the consistency of SGP 3.0 with the prototype algorithms; and (b) by comparing SGP 3.0 NO2 data with ground-based observations reported by the WMO/GAW NDACC network of UV-visible DOAS/SAOZ spectrometers. This delta-validation study concludes that SGP 3.0 is a significant improvement over the previous processor, IPF 5.04. For three particular SCIAMACHY states, the study reveals unexplained features in the slant columns and air mass factors, although the quantitative impact on SGP 3.0 vertical columns is not significant.
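For context on the quantities compared here, DOAS-type retrievals relate them through the standard expression VCD = SCD / AMF, i.e. the vertical column density is the measured slant column density divided by the air mass factor, so anomalies in either the slant columns or the AMFs propagate directly into the vertical columns; this is a textbook relation rather than a result of the study.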