37 results for Cloud computing, OpenNebula, synchronization, replica, wide area network
Abstract:
Cost-efficient operation while satisfying the performance and availability guarantees in Service Level Agreements (SLAs) is a challenge for Cloud Computing, as these are potentially conflicting objectives. We present a framework for SLA management based on multi-objective optimization. The framework features a forecasting model for determining the best virtual machine-to-host allocation given the need to minimize SLA violations, energy consumption and resource waste. A comprehensive SLA management solution is proposed that uses event processing for monitoring and enables dynamic provisioning of virtual machines onto the physical infrastructure. We validated our implementation against several standard heuristics and showed that our approach performs significantly better.
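The abstract describes VM-to-host allocation driven by three potentially conflicting objectives. The following minimal sketch, not the authors' implementation, shows one way such a multi-objective placement decision could be scored; the host model, weights and estimators are illustrative assumptions.

```python
# Hedged sketch: rank candidate hosts for a VM by a weighted combination of predicted
# SLA risk, energy cost and resource waste (all estimators are invented placeholders).
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free: float        # fraction of CPU still available (0..1)
    mem_free: float        # fraction of RAM still available (0..1)
    power_per_load: float  # watts added per unit of CPU load

def placement_score(host: Host, vm_cpu: float, vm_mem: float,
                    w_sla: float = 0.5, w_energy: float = 0.3, w_waste: float = 0.2) -> float:
    """Lower is better: combines predicted SLA risk, energy cost and resource waste."""
    sla_risk = max(0.0, vm_cpu - host.cpu_free) + max(0.0, vm_mem - host.mem_free)
    energy = host.power_per_load * vm_cpu
    waste = abs((host.cpu_free - vm_cpu) - (host.mem_free - vm_mem))  # imbalance left behind
    return w_sla * sla_risk + w_energy * energy + w_waste * waste

def best_host(hosts: list[Host], vm_cpu: float, vm_mem: float) -> Host:
    return min(hosts, key=lambda h: placement_score(h, vm_cpu, vm_mem))

if __name__ == "__main__":
    hosts = [Host("h1", 0.6, 0.5, 120.0), Host("h2", 0.3, 0.7, 90.0)]
    print(best_host(hosts, vm_cpu=0.2, vm_mem=0.3).name)
```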
Abstract:
Cloud computing services have emerged as an essential component of the Enterprise IT infrastructure. Migration towards a full-range, large-scale convergence of Cloud and network services has become the current trend for addressing the requirements of the Cloud environment. Our approach takes the infrastructure-as-a-service paradigm to build converged virtual infrastructures, which allow offering tailored performance and enable multi-tenancy over a common physical infrastructure. Thanks to virtualization, new exploitation activities of the physical infrastructures may arise for both transport network and Data Centre services. This approach makes the network and Data Centre resources dedicated to Cloud Computing converge on the same flexible and scalable level. The work presented here is based on the automation of the virtual infrastructure provisioning service. On top of the virtual infrastructures, a coordinated operation and control of the different resources is performed with the objective of automatically tailoring connectivity services to the Cloud service dynamics. Furthermore, in order to support elasticity of the Cloud services through the optical network, dynamic re-planning features have been added to the virtual infrastructure service, allowing existing virtual infrastructures to be scaled up or down to optimize resource utilisation and dynamically adapt to users' demands. Thus, the dynamic re-planning of the service becomes a key component for coordinating Cloud and optical network resources optimally in terms of resource utilisation. The presented work is complemented with a use case of the virtual infrastructure service being adopted in a distributed Enterprise Information System that scales up and down as a function of application requests.
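As a purely illustrative sketch, a utilisation-driven re-planning check of the kind the abstract mentions could look as follows; the thresholds and the virtual-infrastructure shape are assumptions, not the paper's mechanism.

```python
# Hedged sketch: decide whether a virtual infrastructure should be scaled up or down
# based on average utilisation; all values and thresholds are invented.
from dataclasses import dataclass

@dataclass
class VirtualInfrastructure:
    vm_count: int
    avg_utilisation: float  # 0..1, averaged over the provisioned VMs

def replan(vi: VirtualInfrastructure, scale_up_at: float = 0.8, scale_down_at: float = 0.3) -> int:
    """Return the VM count suggested by a simple utilisation-driven re-planning step."""
    if vi.avg_utilisation > scale_up_at:
        return vi.vm_count + 1          # scale up: add capacity and the connectivity for it
    if vi.avg_utilisation < scale_down_at and vi.vm_count > 1:
        return vi.vm_count - 1          # scale down: release unused compute and network resources
    return vi.vm_count                  # keep the current plan

print(replan(VirtualInfrastructure(vm_count=4, avg_utilisation=0.85)))  # -> 5
```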
Abstract:
Cloud Computing is an enabler for delivering large-scale, distributed enterprise applications with strict performance requirements. Such applications often have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies with the CloudSim simulator, using data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities. We then show how different scaling policies can be used for simulating multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
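CloudSim itself is a Java toolkit; the Python snippet below is only a hedged illustration of how an SLA-aware scaling policy of the kind compared in the paper might be expressed, with all names, targets and numbers assumed for the example.

```python
# Hedged sketch: add VMs while the observed response time violates an assumed SLA target,
# and scale in when there is ample headroom (to save energy). Not the paper's policies.
def sla_scaling_decision(response_times_ms: list[float], sla_target_ms: float,
                         current_vms: int, max_vms: int) -> int:
    """Return the VM count after applying a simple SLA-aware scale-out/scale-in rule."""
    if not response_times_ms:
        return current_vms
    p95 = sorted(response_times_ms)[int(0.95 * (len(response_times_ms) - 1))]
    if p95 > sla_target_ms and current_vms < max_vms:
        return current_vms + 1   # SLA violated: scale out
    if p95 < 0.5 * sla_target_ms and current_vms > 1:
        return current_vms - 1   # plenty of headroom: scale in
    return current_vms

print(sla_scaling_decision([120, 180, 260, 310], sla_target_ms=250, current_vms=3, max_vms=10))
```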
Abstract:
Long Term Evolution (LTE) represents the fourth-generation (4G) technology, which is capable of providing high data rates as well as support for high-speed mobility. The EU FP7 Mobile Cloud Networking (MCN) project integrates cloud computing concepts into LTE mobile networks in order to increase LTE's performance. In this way a shared, distributed, virtualized LTE mobile network is built that can optimize the utilization of virtualized computing, storage and network resources and minimize communication delays. Two important features that can be used in such a virtualized system to improve its performance are user mobility and bandwidth prediction. This paper introduces the architecture and the challenges associated with user mobility and bandwidth prediction approaches in virtualized LTE systems.
Abstract:
Mobile network usage has increased rapidly over the years, with significant consequences for performance requirements. In this paper, we propose mechanisms that use Information-Centric Networking to perform load balancing in mobile networks, providing content delivery over multiple radio technologies at the same time and thus using resources efficiently and improving the overall performance of content transfer. Meaningful results were obtained by comparing content transfer over single radio links using typical strategies against content transfer over multiple radio links using Information-Centric Networking load balancing. Results demonstrate that Information-Centric Networking load balancing increases the performance and efficiency of 3GPP Long Term Evolution mobile networks while greatly improving the network quality perceived by end users.
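The core idea of load balancing content transfer across several radio links can be illustrated with a small sketch; the interface names, capacities and the proportional-split rule below are assumptions, not the mechanisms evaluated in the paper.

```python
# Hedged sketch: split content requests (Interests) across radio interfaces in proportion
# to an estimate of each link's available capacity. All values are invented.
import random

def pick_interface(link_capacity_mbps: dict[str, float]) -> str:
    """Choose an outgoing radio interface with probability proportional to its capacity."""
    total = sum(link_capacity_mbps.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for iface, cap in link_capacity_mbps.items():
        cumulative += cap
        if r <= cumulative:
            return iface
    return next(iter(link_capacity_mbps))  # fallback, should not normally be reached

links = {"lte": 40.0, "wifi": 60.0}
requests = [pick_interface(links) for _ in range(1000)]
print({iface: requests.count(iface) for iface in links})  # roughly a 40/60 split
```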
Abstract:
Recently, the telecommunication industry has benefited from infrastructure sharing, one of the most fundamental enablers of cloud computing, leading to the emergence of the Mobile Virtual Network Operator (MVNO) concept. The most important aims of this approach are the support of on-demand provisioning and the elasticity of virtualized mobile network components, based on data traffic load. To realize this, during operation and management procedures the virtualized services need to be triggered in order to scale an instance up/down or out/in. In this paper we propose an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service), comprising two algorithms that predict user mobility and network link bandwidth availability, which can be implemented in a cloud-based mobile network structure and used as a support service by any other virtualized mobile network service. MOBaaS can provide prediction information in order to generate the triggers required for the on-demand deployment, provisioning and disposal of virtualized network components. This information can also be used for self-adaptation procedures and optimal network function configuration during run-time operation. Through preliminary experiments with a prototype implementation on the OpenStack platform, we evaluated and confirmed the feasibility and effectiveness of the prediction algorithms and the proposed architecture.
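To make the prediction-then-trigger idea concrete, here is a hedged toy sketch, not the MOBaaS algorithms: a first-order Markov predictor of the next cell a user will attach to, followed by a scale-out trigger. The traces, cell names and trigger condition are all invented.

```python
# Hedged sketch: learn cell-to-cell transitions from historical traces, predict the next
# cell, and emit a scaling trigger for the components serving that cell.
from collections import Counter, defaultdict

def train_transitions(traces: list[list[str]]) -> dict[str, Counter]:
    """Count cell-to-cell transitions from historical user traces."""
    transitions: dict[str, Counter] = defaultdict(Counter)
    for trace in traces:
        for current_cell, next_cell in zip(trace, trace[1:]):
            transitions[current_cell][next_cell] += 1
    return transitions

def predict_next_cell(transitions: dict[str, Counter], current_cell: str):
    counts = transitions.get(current_cell)
    return counts.most_common(1)[0][0] if counts else None

traces = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
model = train_transitions(traces)
predicted = predict_next_cell(model, "B")
print(predicted)  # -> "C"
if predicted == "C":
    print("trigger: scale out virtualized components serving cell C")
```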
Abstract:
The evolution of wireless access technologies and mobile devices, together with the constant demand for video services, has created new Human-Centric Multimedia Networking (HCMN) scenarios. However, HCMN poses several challenges for content creators and network providers in delivering multimedia data with an acceptable quality level based on the user experience. Moreover, human experience and context, as well as network information, play an important role in adapting and optimizing video dissemination. In this paper, we discuss trends for providing video dissemination with Quality of Experience (QoE) support by integrating HCMN with cloud computing approaches. We identify five trends arising from this integration, namely Participatory Sensor Networks, Mobile Cloud Computing formation, QoE assessment, QoE management, and video or network adaptation.
Abstract:
Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve user experience. In order to achieve this, it is necessary to consider next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with the upcoming 5th Generation networking access. A cohesive end-to-end architecture is proposed, fully exploiting Information-Centric Networking together with the Mobile Follow-Me Cloud approach, for enhancing the migration of content caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500 when a mobile user performs a request, compared against other existing solutions. The performed evaluation considers a realistic core network, with functional and non-functional measurements, including the deployment of the entire system and the computation and allocation/migration of resources. The achieved results reveal that the proposed architecture is beneficial not only from the users' perspective but also from the providers' point of view, as providers may be able to optimize their resources and achieve significant bandwidth savings.
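For illustration only, a simplified cache-relocation rule in the spirit of the Follow-Me Cloud approach could weigh expected latency gains against a migration cost, as sketched below; the cost model and numbers are assumptions rather than the paper's content-relocation algorithm.

```python
# Hedged sketch: migrate a cached item to the edge node nearest to the user when the
# cumulative latency saved over expected future requests beats the one-off migration cost.
def should_migrate(latency_current_ms: float, latency_candidate_ms: float,
                   migration_cost_ms: float, expected_requests: int) -> bool:
    """True if relocating the content cache is expected to pay off."""
    saving_per_request = latency_current_ms - latency_candidate_ms
    return saving_per_request * expected_requests > migration_cost_ms

# User moved closer to edge node B; cache currently sits at edge node A (invented values).
print(should_migrate(latency_current_ms=40.0, latency_candidate_ms=8.0,
                     migration_cost_ms=500.0, expected_requests=30))  # -> True
```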
Abstract:
OBJECTIVE: A previous study of radiofrequency neurotomy of the articular branches of the obturator nerve for hip joint pain produced modest results. Based on an anatomical and radiological study, we sought to define a potentially more effective radiofrequency method. DESIGN: Ten cadavers were studied, four of them bilaterally. The obturator nerve and its articular branches were marked by wires. Their radiological relationship to the bone structures on fluoroscopy was imaged and analyzed. A magnetic resonance imaging (MRI) study was undertaken on 20 patients to determine the structures that would be encountered by the radiofrequency electrode during different possible percutaneous approaches. RESULTS: The articular branches of the obturator nerve vary in location over a wide area. The previously described method of denervating the hip joint did not take this variation into account. Moreover, it approached the nerves perpendicularly. Because optimal coagulation requires electrodes to lie parallel to the nerves, a perpendicular approach probably produced only a minimal lesion. In addition, MRI demonstrated that a perpendicular approach is likely to puncture femoral vessels. Vessel puncture can be avoided if an oblique pass is used. Such an approach minimizes the angle between the target nerves and the electrode, and increases the likelihood of the nerve being captured by the lesion made. Multiple lesions need to be made in order to accommodate the variability in location of the articular nerves. CONCLUSIONS: The method that we described has the potential to produce complete and reliable nerve coagulation. Moreover, it minimizes the risk of penetrating the great vessels. The efficacy of this approach should be tested in clinical trials.
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if the user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of the distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs for controlling the scaling of distributed services by combining data analysis mechanisms with application benchmarking using multiple VM configurations. Based on processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used for controlling the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be successfully used for controlling the management of distributed services scaling.
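A minimal sketch, under assumptions, of the kind of rule inference the abstract describes: fit a relation between a monitored predictor metric (here, request arrival rate) and an SLA parameter (response time) from benchmark data, then invert it to obtain a scale-out threshold. The data, metric names and SLA limit are illustrative, not the paper's benchmarks.

```python
# Hedged sketch: least-squares fit of response time vs. request rate, then derive the
# request rate at which the assumed SLA limit would be reached per service instance.
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Least-squares fit y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return a, mean_y - a * mean_x

# Benchmark observations: requests/s vs. measured response time in ms (invented numbers).
rate = [50, 100, 150, 200, 250]
resp = [80, 120, 170, 230, 290]
a, b = fit_line(rate, resp)

sla_limit_ms = 250.0
scale_out_threshold = (sla_limit_ms - b) / a   # request rate at which the SLA would be hit
print(f"scale out when request rate exceeds ~{scale_out_threshold:.0f} req/s per service instance")
```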
Abstract:
Morphometric investigations using a point- and intersection-counting strategy in the lung are often not able to reveal the full set of morphologic changes. This happens particularly when structural modifications are not expressed in terms of volume density changes and when rough and fine surface density alterations cancel each other out at different magnifications. Making use of digital image processing, we present a methodological approach that allows changes in the geometrical properties of the parenchymal lung structure to be quantified easily and quickly and that closely reflects the visual appreciation of the changes. Randomly sampled digital images from light microscopic sections of lung parenchyma are filtered, binarized, and skeletonized. The lung septa are thus represented as a single-pixel-wide line network with nodal points and end points and the corresponding internodal and end segments. By automatically counting the number of points and measuring the lengths of the skeletal segments, the lung architecture can be characterized and very subtle structural changes can be detected. This new methodological approach to lung structure analysis is highly sensitive to morphological changes in the parenchyma: it detected highly significant quantitative alterations in the structure of lungs of rats treated with a glucocorticoid hormone, where classical morphometry had partly failed.
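The filter-binarize-skeletonize-count pipeline described above can be sketched with standard image-processing libraries; the snippet below is a hedged approximation using scikit-image and SciPy, with the input image, smoothing and thresholding choices assumed rather than taken from the authors' exact protocol.

```python
# Hedged sketch: smooth, binarize and skeletonize a micrograph, then classify skeleton
# pixels into end points (1 neighbour) and nodal points (>=3 neighbours).
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import skeletonize

def analyse_parenchyma(gray_image: np.ndarray) -> dict:
    smoothed = gaussian(gray_image, sigma=1.0)            # noise filtering
    binary = smoothed > threshold_otsu(smoothed)          # binarization of septal tissue
    skeleton = skeletonize(binary)                        # single-pixel-wide line network

    # Count skeleton neighbours of each skeleton pixel with a 3x3 convolution.
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbours = convolve(skeleton.astype(np.uint8), kernel, mode="constant")
    end_points = int(np.sum(skeleton & (neighbours == 1)))
    nodal_points = int(np.sum(skeleton & (neighbours >= 3)))
    total_length_px = int(skeleton.sum())                 # crude total skeleton length in pixels

    return {"end_points": end_points, "nodal_points": nodal_points,
            "skeleton_length_px": total_length_px}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.random((256, 256))                          # placeholder for a real micrograph
    print(analyse_parenchyma(demo))
```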
Abstract:
In order to determine the extent and timing of dyke formation in the Ladakh Batholith, we examined about 30 mostly andesitic dykes intruding the Ladakh Batholith in a ca. 50 km wide area to the west of Leh (NW India). The dykes in the east of the area trend E-NE and those in the west trend N-NW. The difference in orientation is also evident in the petrography and isotopic signatures. The eastern dykes contain corroded quartz xenocrysts and show negative ε0(Nd) and positive ε0(Sr) values, whereas the western dykes do not contain quartz xenocrysts and exhibit positive ε0(Nd) and near-zero ε0(Sr) values. The variability in Sr-Nd isotopes (ε0(Nd) = 3.6 to −9.6, ε0(Sr) = 0.4 to 143) and the quartz xenocrysts can best be explained by (differing degrees of) crustal assimilation of the parent magma of the dykes. Separated minerals from five dykes were dated by 40Ar-39Ar incremental heating: amphibole ages range between 50 and 54 Ma, and one biotite dated both by Rb-Sr and by 40Ar-39Ar gave an age of 45 Ma. One dated pseudotachylyte sample attests to brittle faulting at ca. 54 Ma. The combination of structural field evidence with petrographic, isotopic and geochronological analyses demonstrates that the dykes did not form from a single, progressively differentiating magma chamber, despite having formed in the same tectonic setting around the same time, and that processes such as crustal assimilation and magma mixing/mingling also played a significant role in magma petrogenesis.
Abstract:
The aim of this study was to describe the clinical and PSG characteristics of narcolepsy with cataplexy and their genetic predisposition by using the retrospective patient database of the European Narcolepsy Network (EU-NN). We have analysed retrospective data of 1099 patients with narcolepsy diagnosed according to International Classification of Sleep Disorders-2. Demographic and clinical characteristics, polysomnography and multiple sleep latency test data, hypocretin-1 levels, and genome-wide genotypes were available. We found a significantly lower age at sleepiness onset (men versus women: 23.74 ± 12.43 versus 21.49 ± 11.83, P = 0.003) and longer diagnostic delay in women (men versus women: 13.82 ± 13.79 versus 15.62 ± 14.94, P = 0.044). The mean diagnostic delay was 14.63 ± 14.31 years, and longer delay was associated with higher body mass index. The best predictors of short diagnostic delay were young age at diagnosis, cataplexy as the first symptom and higher frequency of cataplexy attacks. The mean multiple sleep latency negatively correlated with Epworth Sleepiness Scale (ESS) and with the number of sleep-onset rapid eye movement periods (SOREMPs), but none of the polysomnographic variables was associated with subjective or objective measures of sleepiness. Variant rs2859998 in UBXN2B gene showed a strong association (P = 1.28E-07) with the age at onset of excessive daytime sleepiness, and rs12425451 near the transcription factor TEAD4 (P = 1.97E-07) with the age at onset of cataplexy. Altogether, our results indicate that the diagnostic delay remains extremely long, age and gender substantially affect symptoms, and that a genetic predisposition affects the age at onset of symptoms.