26 results for cloud-based applications
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means for specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources for ensuring that the performance requirements of all their applications are met. They are also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants' applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and incorporated into semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
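The genetic-algorithm-based allocator is described only at a high level above. As a rough, self-contained illustration of the general technique, and not of the thesis's actual encoding, objectives, or operators, the following Python sketch evolves VM-to-host assignments under a weighted-sum fitness that combines a provider-cost objective (active hosts) with a tenant-performance proxy (capacity violations); all demands, capacities, and weights are invented.

    # Minimal GA sketch for VM-to-host allocation (illustrative assumptions only).
    import random

    NUM_VMS, NUM_HOSTS = 12, 4
    VM_CPU = [random.randint(1, 4) for _ in range(NUM_VMS)]   # hypothetical demands
    HOST_CPU = [8] * NUM_HOSTS                                # hypothetical capacities

    def fitness(alloc):
        """Weighted sum: minimize active hosts (cost) and capacity violations."""
        load = [0] * NUM_HOSTS
        for vm, host in enumerate(alloc):
            load[host] += VM_CPU[vm]
        active = sum(1 for l in load if l > 0)
        overload = sum(max(0, l - c) for l, c in zip(load, HOST_CPU))
        return active + 10 * overload  # lower is better; weights are assumptions

    def crossover(a, b):
        cut = random.randrange(1, NUM_VMS)       # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(alloc, rate=0.1):
        return [random.randrange(NUM_HOSTS) if random.random() < rate else h
                for h in alloc]

    pop = [[random.randrange(NUM_HOSTS) for _ in range(NUM_VMS)] for _ in range(40)]
    for _ in range(200):
        pop.sort(key=fitness)                    # elitist selection
        elite = pop[:10]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(30)]
    best = min(pop, key=fitness)
    print("best allocation:", best, "fitness:", fitness(best))

A real multi-objective variant would keep a Pareto front (e.g., NSGA-II-style ranking) instead of collapsing the criteria into one weighted sum.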
Abstract:
Retinal degenerative diseases that target photoreceptors or the adjacent retinal pigment epithelium (RPE) affect millions of people worldwide. Retinal degeneration (RD) is found in many different forms of retinal disease, including retinitis pigmentosa (RP), age-related macular degeneration (AMD), diabetic retinopathy, cataracts, and glaucoma. Effective treatments for retinal degeneration have been widely investigated. Gene-replacement therapy has been shown to improve visual function in inherited retinal disease; however, it is less effective in advanced disease. Stem cell-based therapy is being pursued as a potential alternative approach to the treatment of retinal degenerative diseases. In this review, we focus on stem cell-based therapies in the pipeline and summarize progress in the treatment of retinal degenerative diseases.
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, in which end-users expect services to be accessible anytime and anywhere. Service availability also depends on the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology for measuring the energy consumption of network interfaces is proposed. Using this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network-interface state management and application-level network design on energy consumption. Additionally, the outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
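The measurement methodology itself is not detailed in the abstract. As a generic illustration of how interface energy can be estimated from instrumented power samples (the paper's actual setup may differ), the sketch below integrates sampled voltage and current over time with the trapezoid rule; the trace values are invented.

    # Generic sketch: estimating network-interface energy from sampled
    # voltage/current readings via trapezoidal integration (illustrative only).
    def energy_joules(samples):
        """samples: list of (t_seconds, volts, amps) tuples, sorted by time."""
        total = 0.0
        for (t0, v0, i0), (t1, v1, i1) in zip(samples, samples[1:]):
            p0, p1 = v0 * i0, v1 * i1             # instantaneous power (watts)
            total += 0.5 * (p0 + p1) * (t1 - t0)  # trapezoid rule
        return total

    # Hypothetical trace: 3.7 V supply, current rising during a transfer burst.
    trace = [(0.0, 3.7, 0.05), (0.5, 3.7, 0.30), (1.0, 3.7, 0.28), (1.5, 3.7, 0.06)]
    print(f"{energy_joules(trace):.3f} J consumed over the trace")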
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking using multiple VM configurations. By processing data sets generated from multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used for controlling the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be successfully used for controlling the scaling of distributed services.
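As a hedged illustration of what an inferred SLA scaling rule might look like (the paper's actual predictor metrics, models, and thresholds are not given in the abstract), the following sketch maps a hypothetical benchmark-fitted predictor to a scale-in/scale-out decision.

    # Illustrative sketch: a predictor metric (queue length) learned from
    # benchmark traces predicts an SLA parameter (response time); the rule
    # scales out before the predicted value breaches the SLA. All names,
    # coefficients, and thresholds below are assumptions.
    def predicted_response_ms(queue_len, instances):
        # Hypothetical regression fitted on benchmark-generated data sets.
        return 40 + 25 * queue_len / max(instances, 1)

    def scaling_decision(queue_len, instances, sla_ms=200):
        pred = predicted_response_ms(queue_len, instances)
        if pred > 0.9 * sla_ms:                    # scale out pre-emptively
            return instances + 1
        if pred < 0.4 * sla_ms and instances > 1:  # scale in when over-provisioned
            return instances - 1
        return instances

    print(scaling_decision(queue_len=30, instances=3))  # -> 4 (scale out)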
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on the fulfillment of SLAs. We show how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
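The correlation method is not specified in the abstract. One plausible way to turn a discovered indicator correlation into a refined service level objective is sketched below, using Pearson correlation over invented monitoring samples (all names and thresholds are assumptions; requires Python 3.10+).

    # Sketch: discover a correlated indicator pair, then derive a refined SLO
    # that caps the leading indicator where the SLA parameter stayed healthy.
    from statistics import correlation  # Python 3.10+

    db_conn = [12, 30, 45, 60, 80, 95]       # DB connections (invented samples)
    resp_ms = [50, 90, 140, 190, 260, 310]   # response time at the same instants

    r = correlation(db_conn, resp_ms)        # Pearson's r
    if abs(r) > 0.8:
        # Refined SLO: highest observed connection count at which the primary
        # SLA parameter (response time) stayed below the 200 ms target.
        safe = max(c for c, t in zip(db_conn, resp_ms) if t < 200)
        print(f"r={r:.2f}; refined SLO: keep DB connections <= {safe}")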
Abstract:
Seasonal snow cover is of great environmental and socio-economic importance for the European Alps. Therefore, a high priority has been assigned to quantifying its temporal and spatial variability. Complementary to land-based monitoring networks, optical satellite observations can be used to derive spatially comprehensive information on snow cover extent. For understanding long-term changes in Alpine snow cover extent, the data acquired by the Advanced Very High Resolution Radiometer (AVHRR) sensors mounted onboard the National Oceanic and Atmospheric Administration (NOAA) and Meteorological Operational satellite (MetOp) platforms offer a unique source of information. In this paper, we present the first space-borne 1 km snow extent climatology for the Alpine region derived from AVHRR data over the period 1985–2011. The objective of this study is twofold: first, to generate a new set of cloud-free satellite snow products using a specific cloud gap-filling technique, and second, to examine the spatiotemporal distribution of snow cover in the European Alps over the last 27 yr from the satellite perspective. For this purpose, snow parameters such as snow onset day, snow cover duration (SCD), melt-out date, and snow cover area percentage (SCA) were employed to analyze the spatiotemporal variability of snow cover over the course of three decades. On the regional scale, significant trends were found toward a shorter SCD at lower elevations in the south-east and south-west. However, our results do not show any significant trends in the monthly mean SCA over the last 27 yr. This is in agreement with other research findings and may indicate a deceleration of the decreasing snow trend in the Alpine region. Furthermore, such data may provide spatially and temporally homogeneous snow information for comprehensive use in related research fields (e.g., hydrologic and economic applications) or can serve as a reference for climate models.
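For readers unfamiliar with how such per-region trends are estimated, the following sketch fits an ordinary least-squares slope to an invented annual SCD series; the study's actual trend test is not stated in the abstract (requires Python 3.10+).

    # Illustrative sketch: estimating a linear trend in snow cover duration
    # (SCD) over a 27-yr record. Data below are fabricated for demonstration.
    from statistics import linear_regression  # Python 3.10+

    years = list(range(1985, 2012))                                  # 27 yr
    scd = [150 - 0.6 * (y - 1985) + ((y * 7) % 11 - 5) for y in years]

    slope, intercept = linear_regression(years, scd)
    print(f"SCD trend: {slope:.2f} days/yr ({slope * 10:.1f} days per decade)")

A significance test (e.g., Mann-Kendall) would normally accompany the slope before a trend is called significant.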
Abstract:
The Internet of Things (IoT) is attracting considerable attention from universities, industry, citizens, and governments for applications such as healthcare, environmental monitoring, and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing, and memory, and low-power radios are very sensitive to noise, interference, and multipath distortion. This article therefore proposes Routing by Energy and Link quality (REL), a routing protocol for IoT applications. To increase reliability and energy efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy, and hop count. Furthermore, REL includes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes and networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases network lifetime and service availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate compared with well-known protocols.
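The abstract does not give REL's exact metric composition. As an illustrative sketch only, the following ranks candidate routes by the weakest end-to-end link, then by minimum residual energy, then by hop count; the tie-breaking order and all data are assumptions, not REL's published algorithm.

    # Sketch of REL-style route selection over the three factors the abstract
    # names: end-to-end link quality, residual energy, and hop count.
    def route_key(route):
        """route: dict of per-hop link qualities (0..1) and node energies (0..1)."""
        weakest_link = min(route["link_quality"])  # end-to-end quality bound
        min_energy = min(route["energy"])          # protect nearly depleted nodes
        hops = len(route["link_quality"])
        return (-weakest_link, -min_energy, hops)  # ascending sort = best first

    routes = [
        {"link_quality": [0.9, 0.8, 0.9], "energy": [0.7, 0.4, 0.9]},
        {"link_quality": [0.9, 0.9],      "energy": [0.3, 0.8]},
        {"link_quality": [0.6, 0.9, 0.9], "energy": [0.9, 0.9, 0.9]},
    ]
    best = min(routes, key=route_key)
    print("selected route:", best)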
Abstract:
Tropical wetlands are estimated to represent about 50% of natural wetland methane (CH4) emissions and explain a large fraction of the observed CH4 variability on timescales ranging from glacial–interglacial cycles to the currently observed year-to-year variability. Despite their importance, however, tropical wetlands are poorly represented in global models aiming to predict global CH4 emissions. This publication documents a first step in the development of a process-based model of CH4 emissions from tropical floodplains for global applications. For this purpose, the LPX-Bern Dynamic Global Vegetation Model (LPX hereafter) was slightly modified to represent floodplain hydrology, vegetation and associated CH4 emissions. The extent of tropical floodplains was prescribed using output from the spatially explicit hydrology model PCR-GLOBWB. We introduced new plant functional types (PFTs) that explicitly represent floodplain vegetation. The PFT parameterizations were evaluated against available remote-sensing data sets (GLC2000 land cover and MODIS Net Primary Productivity). Simulated CH4 flux densities were evaluated against field observations and regional flux inventories. Simulated CH4 emissions at the Amazon Basin scale were compared to model simulations performed in the WETCHIMP intercomparison project. We found that LPX reproduces the average magnitude of observed net CH4 flux densities for the Amazon Basin. However, the model does not reproduce the variability between sites or between years within a site. Unfortunately, site information is too limited to confirm or refute some model features. At the Amazon Basin scale, our results underline the large uncertainty in the magnitude of wetland CH4 emissions. Sensitivity analyses gave insights into the main drivers of floodplain CH4 emissions and their associated uncertainties. In particular, uncertainties in floodplain extent (i.e., the difference between GLC2000 and PCR-GLOBWB output) modulate the simulated emissions by a factor of about 2. Our best estimates, using PCR-GLOBWB in combination with GLC2000, lead to simulated Amazon-integrated emissions of 44.4 ± 4.8 Tg yr−1. Additionally, the LPX emissions are highly sensitive to vegetation distribution: two simulations with the same mean PFT cover, but different spatial distributions of grasslands within the basin, modulated emissions by about 20%. Correcting the LPX-simulated NPP using MODIS reduces the Amazon emissions by 11.3%. Finally, due to an intrinsic limitation of LPX in accounting for seasonality in floodplain extent, the model failed to reproduce the full dynamics of CH4 emissions, but we propose solutions to this issue. The interannual variability (IAV) of the emissions increases by 90% if the IAV in floodplain extent is accounted for, but still remains lower than in most of the WETCHIMP models. While our model includes more mechanisms specific to tropical floodplains, we were unable to reduce the uncertainty in the magnitude of wetland CH4 emissions of the Amazon Basin. Our results helped identify and prioritize directions towards more accurate estimates of tropical CH4 emissions, and they stress the need for more research to constrain floodplain CH4 emissions and their temporal variability, even before including other fundamental mechanisms such as floating macrophytes or lateral water fluxes.
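As a generic illustration of quantifying the interannual variability (IAV) of annual emissions (the metric actually used by LPX and the WETCHIMP models is not specified in the abstract), consider:

    # Generic sketch: IAV of annual CH4 emissions as the standard deviation of
    # annual totals. The series below is invented purely for demonstration.
    from statistics import stdev, mean

    annual_tg = [41.0, 46.5, 43.2, 48.9, 42.1, 45.7]  # hypothetical Tg/yr values
    print(f"mean = {mean(annual_tg):.1f} Tg/yr, IAV (std dev) = {stdev(annual_tg):.1f} Tg/yr")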
Abstract:
Here we report the first study on the electrochemical energy storage application of a surface-immobilized ruthenium complex multilayer thin film with anion storage capability. We employed a novel dinuclear ruthenium complex with tetrapodal anchoring groups to build well-ordered redox-active multilayer coatings on an indium tin oxide (ITO) surface using a layer-by-layer self-assembly process. Cyclic voltammetry (CV), UV-Visible (UV-Vis) and Raman spectroscopy showed a linear increase of peak current, absorbance and Raman intensities, respectively, with the number of layers. These results indicate the formation of well-ordered multilayers of the ruthenium complex on ITO, which is further supported by X-ray photoelectron spectroscopy analysis. The thickness of the layers can be controlled with nanometer precision. In particular, the thickest layer studied (65 molecular layers, approx. 120 nm thick) demonstrated fast electrochemical oxidation/reduction, indicating very low attenuation of the charge transfer within the multilayer. In situ UV-Vis and resonance Raman spectroscopy results demonstrated the reversible electrochromic/redox behavior of the ruthenium complex multilayered films on ITO with respect to the electrode potential, which is an ideal prerequisite for, e.g., smart electrochemical energy storage applications. Galvanostatic charge–discharge experiments demonstrated pseudocapacitive behavior of the multilayer film, with a good specific capacitance of 92.2 F g−1 at a current density of 10 μA cm−2 and excellent cycling stability. As demonstrated in our prototypical experiments, the fine control of physicochemical properties at the nanometer scale and the relatively good stability of the layers under ambient conditions make multilayer coatings of this type an excellent material for, e.g., electrochemical energy storage, as interlayers in inverted bulk-heterojunction solar cells, and as functional components in molecular electronics applications.
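For readers unfamiliar with the galvanostatic method, the specific capacitance follows from the standard relation C_s = I·Δt/(m·ΔV). The sketch below shows the arithmetic with an invented discharge time, mass loading, and potential window chosen merely to land near the reported 92.2 F g−1; none of these input values come from the paper.

    # Worked illustration of the galvanostatic specific-capacitance formula
    # C_s = I * dt / (m * dV). All inputs below are hypothetical.
    I = 10e-6    # discharge current, A (10 uA, assuming a 1 cm^2 electrode)
    dt = 12.0    # discharge time, s (hypothetical)
    m = 2.0e-6   # active-material mass, g (hypothetical ultrathin film)
    dV = 0.65    # potential window, V (hypothetical)

    C_s = I * dt / (m * dV)
    print(f"specific capacitance = {C_s:.1f} F/g")  # ~92.3 F/g with these inputs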