916 results for Application performance monitoring.
Abstract:
Approximately half of the houses in Northern Ireland were built before any minimum thermal specification or energy efficiency standard was enforced. Furthermore, 44% of households are categorised as being in fuel poverty, i.e. spending more than 10% of household income to heat the house to an acceptable level of thermal comfort. To bring the existing housing stock up to an acceptable standard, retrofitting for improved energy efficiency is essential, and the effectiveness of such improvements under future climate scenarios must also be studied. This paper presents the results of a year-long performance monitoring of two houses that underwent energy efficiency retrofits. Using wireless sensor technology, internal temperature, humidity, external weather, and household gas and electricity usage were monitored for a year. Simulations in the IES-VE dynamic building modelling software were calibrated against the monitoring data to ASHRAE Guideline 14 standards. The energy performance and internal environment of the houses were then assessed for current and future climate scenarios, and the results show the need for a holistic, balanced retrofitting strategy.
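For context, ASHRAE Guideline 14 judges a calibrated model by two statistics, NMBE and CV(RMSE), with hourly acceptance limits of |NMBE| <= 10% and CV(RMSE) <= 30%. A minimal sketch of that check, on hypothetical hourly energy series:

import numpy as np

def calibration_stats(measured, simulated):
    # NMBE and CV(RMSE) as defined in ASHRAE Guideline 14
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    n, mean = measured.size, measured.mean()
    nmbe = 100.0 * (measured - simulated).sum() / ((n - 1) * mean)
    cv_rmse = 100.0 * np.sqrt(((measured - simulated) ** 2).sum() / (n - 1)) / mean
    return nmbe, cv_rmse

measured = np.random.default_rng(0).uniform(5, 15, 8760)   # hypothetical hourly kWh
simulated = measured * 1.05                                # a model biased 5% high
nmbe, cv = calibration_stats(measured, simulated)
print(abs(nmbe) <= 10.0 and cv <= 30.0)                    # hourly acceptance test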
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an indispensable management activity in datacenters for application performance, load balancing, and server consolidation. While state-of-the-art live VM migration strategies focus on improving the migration performance of a single VM, little attention has been given to the migration of multiple VMs. Moreover, existing work on live VM migration ignores inter-VM dependencies as well as the underlying network topology and its bandwidth. Different migration sequences and different bandwidth allocations result in different total migration times and total migration downtimes. This paper concentrates on developing a scheduling algorithm for multiple VM migrations such that migration performance is maximized. We evaluate the proposed algorithm through simulation. The simulation results show that the proposed algorithm can migrate multiple VMs in any datacenter with minimal total migration time and total migration downtime.
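As a rough illustration of why sequencing matters (a toy pre-copy model, not the paper's algorithm): each pre-copy round retransmits the pages dirtied during the previous round, so per-VM migration time and downtime depend on memory size, dirty rate, and allocated bandwidth.

def precopy_time(mem_gb, dirty_gbps, bw_gbps, max_rounds=30, stop_gb=0.001):
    # Geometric pre-copy model: each round resends what was dirtied meanwhile.
    remaining, total = mem_gb, 0.0
    for _ in range(max_rounds):
        t = remaining / bw_gbps
        total += t
        remaining = dirty_gbps * t
        if remaining <= stop_gb:
            break
    downtime = remaining / bw_gbps          # final stop-and-copy phase
    return total + downtime, downtime

# Hypothetical VMs (name, memory GB, dirty rate GB/s) on a 10 GB/s link; a simple
# heuristic is to migrate the least write-intensive VMs first.
vms = [("web", 4, 0.2), ("db", 16, 1.0), ("cache", 8, 0.5)]
order = sorted(vms, key=lambda v: v[2])
print([precopy_time(m, d, 10.0) for _, m, d in order])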
Abstract:
Network topology and routing are two important factors in determining the communication costs of big data applications at large scale. For a given Cluster, Cloud, or Grid (CCG) system, the network topology is fixed, and static or dynamic routing protocols are preinstalled to direct the network traffic; users cannot change them once the system is deployed. Hence, it is hard for application developers to identify the optimal network topology and routing algorithm for applications with distinct communication patterns. In this study, we design a CCG virtual system (CCGVS), which first uses container-based virtualization to let users create a farm of lightweight virtual machines on a single host, and then uses software-defined networking (SDN) to control the network traffic among these virtual machines. Users can change the network topology and control the network traffic programmatically, enabling application developers to evaluate their applications on the same system under different network topologies and routing algorithms. Preliminary experimental results with both synthetic big data programs and the NPB benchmarks show that CCGVS can reproduce the application performance variations caused by network topology and routing algorithms.
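As a hypothetical illustration of the kind of comparison CCGVS enables (not its actual interface): the average hop count seen by one communication pattern under two candidate topologies, assuming shortest-path routing.

import networkx as nx

def avg_hops(graph, pairs):
    # Mean shortest-path length over a set of (source, destination) flows.
    return sum(nx.shortest_path_length(graph, s, d) for s, d in pairs) / len(pairs)

ring = nx.cycle_graph(16)
torus = nx.convert_node_labels_to_integers(nx.grid_2d_graph(4, 4, periodic=True))
pairs = [(i, (i + 8) % 16) for i in range(16)]        # each node talks to its opposite
print(avg_hops(ring, pairs), avg_hops(torus, pairs))  # the torus needs far fewer hops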
Abstract:
Lateral displacement and global stability are the two main stability criteria for soil nail walls. Conventional design methods do not adequately address the deformation behaviour of soil nail walls, owing to the complexity of handling a large number of influencing factors. Consequently, only limited methods of deformation estimation, based on empirical relationships and in situ performance monitoring, are available in the literature. It is therefore desirable to use numerical techniques and statistical methods to gain better insight into the deformation behaviour of soil nail walls. In the present study, numerical experiments are conducted using a 2⁴ factorial design method. Based on analysis of the maximum lateral deformation and factor-of-safety observations from the numerical experiments, regression models for predicting maximum lateral deformation and factor of safety are developed and checked for adequacy. Selection of suitable design factors for the 2⁴ factorial design of numerical experiments enabled the use of the proposed regression models over a practical range of soil nail wall heights and in situ soil variability. The model adequacy analyses and an illustrative example show that the proposed regression models provide a reasonably good estimate of the lateral deformation and global factor of safety of soil nail walls.
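A hedged sketch of the experimental design idea (factor names and effects are invented, not the paper's): generate the 16-run 2⁴ design in coded units and fit a first-order regression model to the observed response.

import itertools
import numpy as np

design = np.array(list(itertools.product([-1, 1], repeat=4)))  # 16 x 4 coded 2^4 design

rng = np.random.default_rng(1)
true_effects = np.array([3.0, -1.5, 0.8, 2.2])                 # synthetic factor effects
response = 20.0 + design @ true_effects + rng.normal(0, 0.3, 16)

X = np.column_stack([np.ones(16), design])                     # intercept + main effects
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
print(coef)   # recovers ~[20, 3, -1.5, 0.8, 2.2]; adequacy checks would follow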
Abstract:
We study wireless multihop energy harvesting sensor networks employed for random field estimation. The sensors sense the random field and generate data to be sent to a fusion node for estimation. Each sensor has an energy harvesting source and can operate in two modes: Wake and Sleep. We consider the problem of obtaining jointly optimal power control, routing, and scheduling policies that ensure a fair utilization of network resources. Since this problem has high computational complexity, we develop a computationally efficient suboptimal approach to obtain good solutions. We study the optimal solution and the performance of the suboptimal approach through numerical examples.
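One fairness notion commonly used in such formulations is max-min fairness; below is a small sketch of max-min fair allocation of a shared capacity among flows (an illustration of the concept, not the paper's joint policy):

def max_min_fair(capacity, demands):
    # Water-filling: satisfy the smallest unmet demand if it fits within an
    # equal share of what remains; otherwise split the remainder equally.
    alloc = [0.0] * len(demands)
    active = sorted(range(len(demands)), key=lambda i: demands[i])
    remaining = capacity
    while active:
        share = remaining / len(active)
        i = active[0]
        if demands[i] <= share:
            alloc[i] = demands[i]
            remaining -= demands[i]
            active.pop(0)
        else:
            for j in active:
                alloc[j] = share
            break
    return alloc

print(max_min_fair(10.0, [2.0, 8.0, 10.0]))   # -> [2.0, 4.0, 4.0]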
Abstract:
A new feature-based technique is introduced to solve the nonlinear forward problem (FP) of electrical capacitance tomography, with the target application of monitoring the metal fill profile in the lost foam casting process. The technique combines a linear solution to the FP with a correction factor (CF). The CF is estimated by an artificial neural network (ANN) trained on key features extracted from the metal distribution, and adjusts the linear FP solution to account for the nonlinear effects caused by the shielding of the metal. This approach shows promising results and avoids the curse of dimensionality by training the ANN on features rather than the actual metal distribution. The ANN is trained using nine features extracted from the metal distributions as input, with the expected sensor readings generated using ANSYS software. The performance of the ANN on the training and testing data was satisfactory, with an average root-mean-square error of 2.2%.
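A hedged sketch of the feature-based idea on synthetic data (the nine features, network size, and correction-factor relation are all invented here): train a small ANN to predict the correction factor from features rather than from the full metal distribution.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = rng.uniform(0, 1, (500, 9))            # nine features per metal distribution
cf = 1.0 + 0.5 * features[:, 0] * features[:, 3]  # synthetic "true" correction factor

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(features[:400], cf[:400])

linear_fp = rng.uniform(0, 1, 100)                   # stand-in for the linear FP solution
corrected = linear_fp * ann.predict(features[400:])  # CF-adjusted sensor predictions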
Abstract:
Software transactional memory (STM) has been proposed as a promising programming paradigm for shared-memory multi-threaded programs, as an alternative to conventional lock-based synchronization primitives. Typical STM implementations employ a conflict detection scheme that works at a uniform access granularity, tracking shared data accesses either at the word/cache-line level or at the object level. It is well known that a single fixed access tracking granularity cannot meet the conflicting goals of reducing false conflicts without adversely impacting concurrency. A fine-grained granularity, while improving concurrency, can hurt performance through lock aliasing, lock validation overheads, and additional cache pressure; a coarse-grained granularity can hurt performance through reduced concurrency. Thus, a fixed or uniform granularity access tracking (UGAT) scheme is application-unaware and rarely matches the access patterns of an individual application or its parts, leading to sub-optimal performance across different parts of the application(s). To mitigate the disadvantages of the UGAT scheme, we propose a Variable Granularity Access Tracking (VGAT) scheme in this paper. We propose a compiler-based approach wherein the compiler uses inter-procedural whole-program static analysis to select the access tracking granularity for different shared data structures of the application, based on the application's data access patterns. We describe our prototype VGAT scheme, using TL2 as our STM implementation. Our experimental results reveal that the VGAT-STM scheme can improve the application performance of STAMP benchmarks by 1.87% to 21.2%.
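A loose illustration of the granularity trade-off using plain locks (STM systems track read/write sets rather than taking locks this way; this is only an analogy):

import threading

class AccountCoarse:
    # One lock covers every field: updates to unrelated fields conflict
    # (the analogue of false conflicts under coarse-grained tracking).
    def __init__(self):
        self.lock = threading.Lock()
        self.balance, self.history = 0, []

class AccountFine:
    # One lock per field: independent updates proceed concurrently, at the
    # cost of more metadata and more lock operations per transaction
    # (the analogue of fine-grained tracking overheads).
    def __init__(self):
        self.balance_lock = threading.Lock()
        self.history_lock = threading.Lock()
        self.balance, self.history = 0, []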
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. Malleable applications, in which the number of processors on which an application executes can be changed during execution, can use their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions, including checkpointing, live migration, and rescheduling, together with runtime decisions for dynamically selecting fault tolerance actions at different points of application execution to maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in application performance, and is effective even for petascale systems and beyond.
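One ingredient such cost models often draw on (an assumption here, not necessarily AdFT's actual model) is Young's approximation for the checkpoint interval that minimizes expected lost work:

import math

def young_interval(checkpoint_cost_s, mtbf_s):
    # Optimal checkpoint period ~ sqrt(2 * C * MTBF) for checkpoint cost C.
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# With 60 s checkpoints and a 1-hour MTBF, checkpoint roughly every 11 minutes.
print(young_interval(60.0, 3600.0))   # ~657 s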
Abstract:
The sensing principle and basic composition of fibre Bragg grating (FBG) sensing systems are first presented. The performance of the FBG strain and temperature sensors we designed is then described, along with results on the reliability of these sensors when applied to offshore platform monitoring, covering fatigue damage, waterproof sealing, and mechanical protection. Finally, other potential applications of FBG sensing systems in offshore engineering are discussed.
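For reference, the standard FBG sensing relation (with textbook silica-fibre constants assumed here, not taken from the paper) gives the Bragg wavelength shift under strain and temperature:

def bragg_shift_nm(wl_nm, strain, dT, p_e=0.22, alpha=0.55e-6, xi=6.7e-6):
    # d(lambda)/lambda = (1 - p_e)*strain + (alpha + xi)*dT
    return wl_nm * ((1.0 - p_e) * strain + (alpha + xi) * dT)

print(bragg_shift_nm(1550.0, 100e-6, 0.0))  # ~0.12 nm for 100 microstrain
print(bragg_shift_nm(1550.0, 0.0, 10.0))    # ~0.11 nm for a 10 K change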
Abstract:
Heart disease is one of the main factors causing death in developed countries. Over several decades, a variety of electronic and computer technologies have been developed to assist clinical practice in cardiac performance monitoring and heart disease diagnosis. Among these methods, ballistocardiography (BCG) has the interesting feature that no electrodes need to be attached to the body during measurement; it therefore offers a potential means of assessing a patient's heart condition at home. In this paper, a comparison is made between two neural-network-based BCG signal classification models. One system uses principal component analysis (PCA), and the other a discrete wavelet transform, to reduce the input dimensionality. The results indicate that the combined wavelet transform and neural network model performs more reliably than the combined PCA and neural network system. Moreover, the wavelet transform requires no prior knowledge of the statistical distribution of the data samples, and both computational complexity and training time are reduced.
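A hedged sketch of the two front ends on synthetic signals (the signal shapes, wavelet choice, and dimensions are assumptions): reduce each beat to a low-dimensional feature vector either by keeping coarse wavelet approximation coefficients or by PCA projection.

import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
signals = rng.normal(0, 1, (200, 256))        # stand-in BCG beats, 256 samples each

# Wavelet route: level-4 approximation coefficients as features.
dwt_features = np.array([pywt.wavedec(s, "db4", level=4)[0] for s in signals])

# PCA route: project onto the same number of principal components.
pca_features = PCA(n_components=dwt_features.shape[1]).fit_transform(signals)
print(dwt_features.shape, pca_features.shape)  # comparable classifier inputs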
Abstract:
Embedded wireless sensor network (WSN) systems have been developed and used in a wide variety of applications, such as local automatic environmental monitoring; medical applications analysing aspects of fitness and health; energy metering and management in the built environment; and traffic pattern analysis and control. While embedded wireless sensor networks have a myriad of applications and future possibilities, one particular implementation of these ambient sensors is in wearable electronics incorporated into body area networks and everyday garments. Some of these systems will incorporate inertial sensing devices and other physical and physiological sensors, with a particular focus on athlete performance monitoring and e-health. Important physical requirements for wearable antennas are that they be lightweight, small, and robust, and that they use materials compatible with a standard manufacturing process, such as flexible polyimide or FR4, where low-cost consumer-oriented products are being produced. The substrate material must be low-loss and flexible, which often necessitates thin dielectric and metallization layers. This paper describes the development of such a wearable, flexible antenna system for ISM-band wearable wireless sensor networks. The material selected for the wearable system in question is DE104i, characterized by a dielectric constant of 3.8 and a loss tangent of 0.02. The antenna feed line is a 50 Ω microstrip topology suitable for use with standard, high-performance, low-cost SMA-type RF connectors, widely used in these types of applications. The desired centre frequency targets the 2.4 GHz ISM band, for compatibility with IEEE 802.15.4 Zigbee communication protocols and the Bluetooth standard, both of which operate in this band.
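As a cross-check on the feed-line geometry, textbook microstrip synthesis formulas (Pozar's closed forms; the substrate thickness below is an assumption) give the trace width for a 50 Ω line on an εr = 3.8 substrate:

import math

def microstrip_w_over_d(z0, er):
    # Narrow-line branch first; switch to the wide-line branch if W/d > 2.
    A = z0 / 60 * math.sqrt((er + 1) / 2) + (er - 1) / (er + 1) * (0.23 + 0.11 / er)
    wd = 8 * math.exp(A) / (math.exp(2 * A) - 2)
    if wd > 2:
        B = 377 * math.pi / (2 * z0 * math.sqrt(er))
        wd = (2 / math.pi) * (B - 1 - math.log(2 * B - 1)
              + (er - 1) / (2 * er) * (math.log(B - 1) + 0.39 - 0.61 / er))
    return wd

d_mm = 0.8                                    # assumed substrate thickness
print(microstrip_w_over_d(50.0, 3.8) * d_mm)  # ~1.7 mm wide trace for 50 ohms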
Abstract:
This paper discusses load-balancing issues when using heterogeneous cluster computers. There is a growing trend towards the use of commodity microprocessor clusters. Although today's microprocessors have reached a theoretical peak performance in the range of one GFLOP/s, heterogeneous clusters of commodity processors are among the most challenging parallel systems to program efficiently. We outline an approach for optimising the performance of parallel mesh-based applications on heterogeneous cluster computers and present case studies with the GeoFEM code. The focus is on application cost monitoring and load balancing using the DRAMA library.
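A minimal sketch of the underlying load-balancing principle (the DRAMA library's actual interface is not reproduced here): partition work in proportion to measured per-node speeds, so faster nodes receive proportionally more mesh elements.

def proportional_partition(n_elements, node_speeds):
    # Share elements in proportion to speed; node 0 absorbs the rounding remainder.
    total = sum(node_speeds)
    shares = [int(n_elements * s / total) for s in node_speeds]
    shares[0] += n_elements - sum(shares)
    return shares

print(proportional_partition(100000, [1.0, 1.0, 2.5, 4.0]))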
Abstract:
This paper details the monitoring and repair of an impact-damaged prestressed concrete bridge. The repair was required following an impact from a low-loader carrying an excavator while passing underneath the bridge. The repair was carried out by preloading the bridge in the vicinity of the damage to relieve some of the prestressing; this preload was removed after the repair material had hardened and gained considerable strength. The true behaviour of damaged prestressed concrete bridges during repair is difficult to estimate theoretically, owing to a lack of benchmarking and the inadequacy of assumed damage models. A network of strain gauges was therefore installed at locations of interest for the entire period of repair. The effects of various activities were observed qualitatively and quantitatively. The interaction and rapid, model-free calibration of damaged and undamaged beams, including the identification of damaged gauges, were also probed. This full-scale experiment is expected to be of interest and benefit to the practising engineer and researcher alike.
Abstract:
In this work, several techniques to monitor the performance of optical networks were developed. These techniques address either the measurement of data signal parameters (optical signal-to-noise ratio and dispersion) or the detection of physical failures in the network infrastructure. The optical signal-to-noise ratio of the transmitted signal was successfully monitored using methods based on Bragg gratings imprinted in highly birefringent fibres, which allowed the signal to be distinguished from the noise through its polarization properties. Monitoring of the signal's group-velocity dispersion was also achieved; in this case, a method based on the analysis of the electrical spectrum of the signal was applied, and it was experimentally demonstrated that this technique is applicable to both amplitude- and phase-modulated signals. A technique to monitor the physical infrastructure of an optical access network was also developed; once again, Bragg gratings (this time imprinted in standard single-mode fibres) formed the basis of the method.
Abstract:
Optical networks are under constant evolution. The growing demand for dynamism requires devices that can accommodate different types of traffic, hence the study of transparent optical networks. This approach makes optical networks more "elegant", through a more efficient use of network resources. In this thesis, the author proposes devices intended to offer alternative approaches, both to the state of the art of these technologies and to their integration into transparent optical networks. Given that full transparency is difficult to achieve with current technology (it may become possible with more mature optical computing), the author proposes techniques with different levels of transparency. On the topic of optical network performance, the author proposes two techniques for monitoring chromatic dispersion with different levels of transparency. The technique proposed in Chapter 3 is best suited to long-haul optical transmission links and high transmission rates, owing not only to its moderate complexity but also to its potentially moderate-to-high cost; however, it applies to several modulation formats, particularly those with a prominent clock component. For the technique in Chapter 4, the level of transparency was not tested across modulation formats, but some transparency is achieved by adding no electrical device after the receiver other than an analog-to-digital converter. This allows the technique to operate at high transmission rates, in excess of 100 Gbit/s, if electro-optical asynchronous sampling is used before the optical receiver, so that a low-cost, low-bandwidth photodetector can be used. Chapter 5 demonstrates a technique for simultaneously monitoring multiple impairments of the optical network by generating novel performance analysis diagrams and using artificial neural networks. In Chapter 6, the author demonstrates an all-optical technique for controlling the optical state of polarization, and gives an example of how all-optical signal processing can fully cooperate with optical performance monitoring.