800 results for cloud computing datacenter performance QoS
Abstract:
In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics: Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested on both a local computer cluster and virtual machines in the cloud. By varying the number of computers and the amount of memory, we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run a cross-evaluation of generic and graph-based analytic algorithms over graph processing and generic platforms to identify the potential performance degradation if only one processing engine is available. We observe that both computing engines show good scalability with increasing computing resources. While GraphLab largely outperforms Spark for graph algorithms, its running time is close to Spark's for non-graph algorithms. Additionally, the running time of Spark for graph algorithms on cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
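To illustrate what such a benchmark harness involves, here is a minimal sketch that times an iterative graph algorithm (PageRank) on Spark, assuming a local PySpark installation; the input file, iteration count, and resource sampling are hypothetical placeholders, not the authors' actual suite.

```python
# Time one Spark graph workload (PageRank) and report driver peak memory.
# Assumes PySpark is installed; "edges.txt" holds one "src dst" pair per line.
import time
import resource
from operator import add
from pyspark.sql import SparkSession

def contribs(dests, rank):
    """Spread a vertex's rank evenly over its outgoing edges."""
    n = len(dests)
    for d in dests:
        yield (d, rank / n)

spark = SparkSession.builder.appName("pagerank-bench").getOrCreate()
lines = spark.sparkContext.textFile("edges.txt")
links = lines.map(lambda l: tuple(l.split())).distinct().groupByKey().cache()
ranks = links.mapValues(lambda _: 1.0)

start = time.perf_counter()
for _ in range(10):                              # hypothetical iteration count
    ranks = (links.join(ranks)
                  .flatMap(lambda kv: contribs(kv[1][0], kv[1][1]))
                  .reduceByKey(add)
                  .mapValues(lambda r: 0.15 + 0.85 * r))
ranks.count()                                    # force evaluation
elapsed = time.perf_counter() - start
peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # KiB on Linux
print(f"running time {elapsed:.1f}s, driver peak RSS {peak_kib / 1024:.0f} MiB")
spark.stop()
```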
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, as well as the interference among competing workloads, makes it difficult to understand a VM's resource demands for meeting its Quality of Service (QoS) targets; second, the dynamics of the applications and the system make it difficult to maintain the desired QoS target while the environment changes; third, the transparency of virtualization presents a hurdle for the guest-layer application and the host-layer VM scheduler to cooperate and improve application QoS and system efficiency. This dissertation proposes to address the above challenges through fuzzy modeling and control theory based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM's complex demands for multiple types of resources automatically online, based on the observed workload and resource usage. Second, to enable fast adaptation in resource management, the fuzzy modeling approach is integrated with a predictive-control-based controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach which can quickly track the applications' QoS targets and optimize the resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box-based resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM's host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy of resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system, and it is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when the resources are contended by dynamic workloads.
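To make the fuzzy-modeling idea concrete, here is a minimal sketch that maps an observed utilization level to a resource-demand estimate using triangular membership functions and a two-rule base with weighted-average defuzzification; the rules, breakpoints, and outputs are invented for illustration and are not the dissertation's actual model.

```python
# Toy fuzzy model: estimate resource demand (fraction of capacity)
# from observed utilization via two rules and triangular memberships.
def tri(x, a, b, c):
    """Triangular membership: rises over [a, b], falls over [b, c], else 0."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_demand(util):
    """Sugeno-style weighted average over two illustrative rules."""
    # Rule 1: IF utilization is LOW  THEN demand ~ 0.3 of capacity
    # Rule 2: IF utilization is HIGH THEN demand ~ 0.9 of capacity
    w_low = tri(util, -0.5, 0.0, 0.7)           # membership in LOW
    w_high = tri(util, 0.3, 1.0, 1.5)           # membership in HIGH
    if w_low + w_high == 0.0:
        return 0.5                              # fallback outside rule coverage
    return (w_low * 0.3 + w_high * 0.9) / (w_low + w_high)

print(predict_demand(0.2))                      # mostly LOW  -> near 0.3
print(predict_demand(0.8))                      # mostly HIGH -> 0.9
```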
Abstract:
This paper presents the Accurate Google Cloud Simulator (AGOCS) – a novel high-fidelity Cloud workload simulator based on parsing real workload traces, which can be conveniently used on a desktop machine for day-to-day research. Our simulation is based on real-world workload traces from a Google Cluster with 12.5K nodes over a period of one calendar month. The framework is able to reveal very precise and detailed parameters of the executed jobs, tasks and nodes, as well as to provide actual resource usage statistics. The system has been implemented in the Scala language with a focus on parallel execution and an easy-to-extend design concept. The paper presents the detailed structural framework for AGOCS and discusses our main design decisions, whilst also suggesting alternative and possibly performance-enhancing future approaches. The framework is available via the Open Source GitHub repository.
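A minimal sketch of the kind of per-job aggregation such trace parsing produces, assuming a simplified CSV layout (timestamp, job id, task id, CPU, memory) rather than the real multi-table Google cluster-trace schema; the file name and columns are placeholders.

```python
# Average per-job CPU and memory usage from a trace in a hypothetical
# CSV layout: timestamp,job_id,task_id,cpu,mem
import csv
from collections import defaultdict

def job_usage(path):
    totals = defaultdict(lambda: [0.0, 0.0, 0])   # job -> [cpu sum, mem sum, samples]
    with open(path, newline="") as f:
        for ts, job, task, cpu, mem in csv.reader(f):
            t = totals[job]
            t[0] += float(cpu)
            t[1] += float(mem)
            t[2] += 1
    return {job: (c / n, m / n) for job, (c, m, n) in totals.items()}

# usage: print(job_usage("trace.csv"))
```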
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, thereby focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool, which utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a System Under Test (SUT) and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line tool as well as a graphical user interface. The goal of this thesis project is two-fold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which will add new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process will be realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case-study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the implemented solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to increase both of these factors.
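A minimal sketch of driving synthetic load from a probabilistic user model, assuming a hypothetical two-state model with weighted transitions and a fixed think time; the states, probabilities, and think time are invented and do not reflect MBPeT's actual model format.

```python
# Walk a tiny probabilistic user model, one weighted transition per step,
# with a think time between actions (where real requests would be issued).
import random
import time

MODEL = {
    "browse": [("browse", 0.6), ("search", 0.3), ("exit", 0.1)],
    "search": [("browse", 0.5), ("search", 0.2), ("exit", 0.3)],
}

def simulate_user(think_time=0.1):
    """Walk the model from 'browse' until 'exit'; return the action trace."""
    trace, state = [], "browse"
    while state != "exit":
        trace.append(state)
        # A real generator would issue the request for `state` against the SUT here.
        time.sleep(think_time)                  # simulated user think time
        actions, weights = zip(*MODEL[state])
        state = random.choices(actions, weights)[0]
    return trace

print(simulate_user(think_time=0.0))            # e.g. ['browse', 'search', 'browse']
```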
Abstract:
Current trends in broadband mobile networks point towards placing different capabilities at the edge of the mobile network in a centralised way. On the one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, all those cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and commonly used for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
Abstract:
Executing a cloud or aerosol physical properties retrieval algorithm on controlled synthetic data is an important step in retrieval algorithm development. Synthetic data can help answer questions about the sensitivity and performance of the algorithm or aid in determining how an existing retrieval algorithm may perform with a planned sensor. Synthetic data can also help in solving issues that may have surfaced in the retrieval results. Synthetic data become very important when other validation methods, such as field campaigns, are of limited scope. These tend to be of relatively short duration and often are costly. Ground stations have limited spatial coverage, while synthetic data can cover large spatial and temporal scales and a wide variety of conditions at a low cost. In this work I develop an advanced cloud and aerosol retrieval simulator for the MODIS instrument, also known as the Multi-sensor Cloud and Aerosol Retrieval Simulator (MCARS). In close collaboration with the modeling community, I have seamlessly combined the GEOS-5 global climate model and the DISORT radiative transfer code, widely used by the remote sensing community, with observations from the MODIS instrument to create the simulator. With the MCARS simulator it was then possible to solve the long-standing issue that the MODIS aerosol optical depth retrievals had a low bias for smoke aerosols: the MODIS aerosol retrieval did not account for the effects of humidity on smoke aerosols. The MCARS simulator also revealed an issue that had not been recognized previously, namely, that the value of fine mode fraction could create a linear dependence between retrieved aerosol optical depth and land surface reflectance. MCARS provided the ability to examine aerosol retrievals against “ground truth” for hundreds of thousands of simultaneous samples over an area covered by only three AERONET ground stations. Findings from MCARS are already being used to improve the performance of operational MODIS aerosol properties retrieval algorithms. The modeling community will use the MCARS data to create new parameterizations for aerosol properties as a function of the properties of the atmospheric column and gain the ability to correct any assimilated retrieval data that may display similar dependencies in comparisons with ground measurements.
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits, driving a large research area (TinyML) to deploy leading Machine Learning (ML) algorithms on the micro-controller class of devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed into Quantized Neural Networks (QNNs) by representing their data down to byte and sub-byte formats, in the integer domain. However, the current generation of micro-controller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels, exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 micro-controller units (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions to deal with sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities for SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
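As a concrete illustration of the sub-byte integer formats involved, here is a minimal sketch that packs unsigned 4-bit weights two per byte and computes a dot product after unpacking; it shows only the storage format, under an assumed unsigned quantization, and is not the PULP-NN or XpulpNN kernel code.

```python
# Pack unsigned 4-bit values two per byte, then unpack and take a dot product.
def pack4(vals):
    """Each value must be in [0, 15]; pairs share one byte, low nibble first."""
    if len(vals) % 2:
        vals = vals + [0]                       # pad odd-length input
    return bytes((lo & 0xF) | ((hi & 0xF) << 4)
                 for lo, hi in zip(vals[::2], vals[1::2]))

def unpack4(packed):
    out = []
    for b in packed:
        out.append(b & 0xF)                     # low nibble
        out.append(b >> 4)                      # high nibble
    return out

weights = [3, 15, 0, 7]                         # 4-bit weights: 2 bytes instead of 4
acts = [1, 2, 3, 4]                             # toy integer activations
w = unpack4(pack4(weights))
print(sum(wi * ai for wi, ai in zip(w, acts)))  # 3*1 + 15*2 + 0*3 + 7*4 = 61
```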
Abstract:
As part of their digital transformation, many organizations are adopting new technologies to support the development, deployment, and management of their microservice-based architectures in cloud environments and across cloud providers. In this scenario, service and event meshes are emerging as dynamic, configurable infrastructure layers that facilitate complex interactions and the management of microservice-based applications and cloud services. The goal of this work is to analyze open-source mesh solutions (Istio, Linkerd, Apache EventMesh) from a performance standpoint when they are used to manage communication between microservice-based workflow applications within a cloud environment. To this end, a system was built to deploy each of the components both within a single cluster and in a multi-cluster environment. Metric collection and summarization were implemented with a custom system compatible with the Prometheus data format. The tests allowed us to evaluate the performance of each component together with its effectiveness. Overall, while the tested service mesh implementations proved to be mature, the event mesh solution we used appeared to be a technology that is not yet mature, owing to numerous operational problems.
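A minimal sketch of what compatibility with the Prometheus data format can amount to, emitting one gauge sample in the Prometheus text exposition format; the metric name and label set are invented for illustration and are not the custom system described above.

```python
# Emit one gauge sample in the Prometheus text exposition format:
#   metric_name{label="value",...} <number>
def prometheus_line(name, labels, value):
    lbl = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{lbl}}} {value}"

# Hypothetical latency sample for a single-cluster Istio deployment.
print(prometheus_line("mesh_request_latency_ms",
                      {"mesh": "istio", "cluster": "single"}, 12.7))
# -> mesh_request_latency_ms{cluster="single",mesh="istio"} 12.7
```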
Abstract:
Trees from tropical montane cloud forest (TMCF) display very dynamic patterns of water use. They are capable of downward water transport towards the soil during leaf-wetting events, likely a consequence of foliar water uptake (FWU), as well as high rates of night-time transpiration (E_night) during drier nights. These two processes might represent important sources of water losses and gains to the plant, but little is known about the environmental factors controlling these water fluxes. We evaluated how contrasting atmospheric and soil water conditions control diurnal, nocturnal and seasonal dynamics of sap flow in Drimys brasiliensis (Miers), a common Neotropical cloud forest species. We monitored the seasonal variation of soil water content, micrometeorological conditions and sap flow of D. brasiliensis trees in the field during wet and dry seasons. We also conducted a greenhouse experiment exposing D. brasiliensis saplings under contrasting soil water conditions to deuterium-labelled fog water. We found that during the night D. brasiliensis possesses heightened stomatal sensitivity to soil drought and vapour pressure deficit, which reduces night-time water loss. Leaf-wetting events had a strong suppressive effect on tree transpiration (E). Foliar water uptake increased in magnitude with drier soil and during longer leaf-wetting events. The difference between diurnal and nocturnal stomatal behaviour in D. brasiliensis could be attributed to an optimization of carbon gain when leaves are dry, as well as minimization of nocturnal water loss. The leaf-wetting events, on the other hand, seem important to D. brasiliensis water balance, especially during soil droughts, both by suppressing tree transpiration (E) and as a small additional water supply through FWU. Our results suggest that decreases in leaf-wetting events in TMCF might increase D. brasiliensis water loss and decrease its water gains, which could compromise its ecophysiological performance and survival during dry periods.
Abstract:
BACKGROUND: Aqueous two-phase micellar systems (ATPMS) are micellar surfactant solutions with physical properties that make them very efficient for the extraction/concentration of biological products. The main proposal discussed in this work is the possible applicability and importance of a novel oscillatory flow micro-reactor (micro-OFR) envisaged for parallel screening and/or development of industrial bioprocesses in ATPMS. Based on the technology of oscillatory flow mixing (OFM), this batch or continuous micro-reactor is presented as a new small-scale alternative for biological or physical-chemical applications. RESULTS: ATPMS experiments were carried out under different OFM conditions (times, temperatures, oscillation frequencies and amplitudes) for the extraction of glucose-6-phosphate dehydrogenase (G6PD) in Triton X-114/buffer with Cibacron Blue as an affinity ligand. CONCLUSION: The results suggest the potential use of the OFR, making this process a promising new alternative for the purification or pre-concentration of bioproducts. Although the applied homogenization and extraction conditions presented no improvements in the partitioning selectivity of the target enzyme, at the resting temperature they influenced the partitioning behavior in the Triton X-114 ATPMS. (C) 2011 Society of Chemical Industry
Abstract:
We investigate in detail the effects of a QND vibrational number measurement made on single ions in a recently proposed measurement scheme for the vibrational state of a register of ions in a linear rf trap [C. D'Helon and G. J. Milburn, Phys. Rev. A 54, 5141 (1996)]. The performance of the measurement shows some interesting patterns which are closely related to searching.
Abstract:
A new cloud-point extraction and preconcentration method, using a cationic surfactant, Aliquat-336 (tricaprylylmethylammonium chloride), has been developed for the determination of cyanobacterial toxins, microcystins, in natural waters. Sodium sulfate was used to induce phase separation at 25 °C. The phase behavior of Aliquat-336 with respect to the concentration of Na2SO4 was studied. The cloud-point system revealed a very high phase volume ratio compared to other established systems of nonionic, anionic, and cationic surfactants. At pH 6-7, it showed an outstanding selectivity in analyte extraction for anionic species. Only MC-LR and MC-YR, which are known to be predominantly anionic, were extracted (with averaged recoveries of 113.9 +/- 9% and 87.1 +/- 7%, respectively). MC-RR, which is likely to be amphoteric in the above pH range, was not detectable in the extract. Coupled to HPLC/UV separation and detection, the cloud-point extraction method (with 2.5 mM Aliquat-336 and 75 mM Na2SO4 at 25 °C) offered detection limits of 150 +/- 7 and 470 +/- 72 pg/mL for MC-LR and MC-YR, respectively, in 25 mL of deionized water. Repeatability of the method was 7.6% for MC-LR and 7.3% for MC-YR. The cloud-point extraction process can be completed within 10-15 min with no cleanup steps required. Applicability of the new method to the determination of microcystins in real samples was demonstrated using natural surface waters, collected from a local river and a local duck pond, spiked with realistic concentrations of microcystins. Effects of salinity and organic matter (TOC) content in the water sample on the extraction efficiency were also studied.
Abstract:
As end-user computing becomes more pervasive, an organization's success increasingly depends on the ability of end-users, usually in managerial positions, to extract appropriate data from both internal and external sources. Many of these data sources include or are derived from the organization's accounting information systems. Managerial end-users with different personal characteristics and approaches are likely to compose queries of differing levels of accuracy when searching the data contained within these accounting information systems. This research investigates how cognitive style elements of personality influence managerial end-user performance in database querying tasks. A laboratory experiment was conducted in which participants generated queries to retrieve information from an accounting information system to satisfy typical information requirements. The experiment investigated the influence of personality on the accuracy of queries of varying degrees of complexity. Relying on the Myers–Briggs personality instrument, results show that perceiving individuals (as opposed to judging individuals) who rely on intuition (as opposed to sensing) composed queries more accurately. As expected, query complexity and academic performance also explain the success of data extraction tasks.
Abstract:
This paper investigates the effectiveness of sea-defense structures in preventing or reducing tsunami overtopping, and evaluates the resulting tsunami impact at El Jadida, Morocco. Different tsunami wave conditions are generated by considering various earthquake scenarios of magnitudes ranging from Mw = 8.0 to Mw = 8.6. These scenarios represent the main active earthquake faults in the SW Iberia margin and are consistent with two past events that generated tsunamis along the Atlantic coast of Morocco. The behavior of incident tsunami waves when interacting with coastal infrastructures is analyzed on the basis of numerical simulations of near-shore tsunami wave propagation. Tsunami impact at the affected site is assessed by computing inundation and current velocity using a high-resolution digital terrain model that incorporates bathymetric, topographic and coastal structures data. Results, in terms of near-shore tsunami propagation snapshots, wave interaction with coastal barriers, and spatial distributions of flow depths and speeds, are presented and discussed in light of what was observed during the 2011 Tohoku-oki tsunami. Predicted results show the different levels of impact that different tsunami wave conditions could generate in the region. Existing coastal barriers around the El Jadida harbour succeeded in reflecting relatively small waves generated by some scenarios, but failed to prevent the overtopping caused by waves from others. For the scenario with the highest impact on the El Jadida coast, significant inundation is computed at the sandy beach and in unprotected areas. The dramatic tsunami impact modeled for the region shows the need for additional tsunami standards, not only for sea-defense structures but also for coastal dwellings and houses, to provide the potential for in-place evacuation.