949 results for performance degradation
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to quantify the overheads of para-virtualization, as well as the additional overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on the different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib1 benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application, i.e. the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines, i.e. these guest operating systems are aware that they are running on a virtual machine, and they provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is OS-assisted virtualization, in which some modifications are made in the guest operating system to enable better performance. In this kind of virtualization the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware. In para-virtualization the device drivers in the guest operating system coordinate with the device drivers of the host operating system, which reduces the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been revealed [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly it is necessary to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
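As a rough illustration of the measurement methodology described above, the sketch below (Python, with hypothetical benchmark binaries standing in for the Netlib codes, since the exact invocations are not given here) times a benchmark on two configurations and reports the relative overhead of the virtualized run against the bare-metal baseline.

```python
import statistics
import subprocess
import time

# Hypothetical benchmark commands; the paper uses Netlib benchmarks, but the
# exact invocations are not given, so these paths are placeholders.
BENCHMARKS = {
    "linpack": ["./linpack_bench"],
    "blas_dgemm": ["./dgemm_bench", "--size", "2048"],
}

def time_command(cmd, runs=5):
    """Return the median wall-clock time (seconds) over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def overhead_percent(bare_metal_s, virtualized_s):
    """Relative overhead of the virtualized run against the bare-metal baseline."""
    return 100.0 * (virtualized_s - bare_metal_s) / bare_metal_s

# Example: times collected on the two configurations would be compared as
#   overhead_percent(t_bare, t_paravirt)
# and again with monitoring/logging enabled to isolate its extra cost.
```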
Abstract:
Most research on distributed space-time block coding (STBC) has so far focused on the case of 2 relay nodes and assumed that the relay nodes are perfectly synchronised at the symbol level. By applying STBC to 3- or 4-relay-node systems, this paper shows that imperfect synchronisation causes significant performance degradation in the conventional detector. To address this, we propose a new STBC detection solution based on the principle of parallel interference cancellation (PIC). The PIC detector is of moderate computational complexity but is very effective in suppressing the impact of imperfect synchronisation.
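For readers unfamiliar with parallel interference cancellation, the following sketch shows a generic PIC detection loop for a linear model y = Hx + n; it illustrates the principle only, not the detector proposed in the paper, and the matched-filter initialization and hard slicing are assumptions.

```python
import numpy as np

def pic_detect(y, H, constellation, iterations=3):
    """Generic parallel interference cancellation for y = H @ x + n.

    y: received samples, shape (m,)
    H: effective channel matrix, shape (m, k) -- one column per transmitted symbol
    constellation: 1-D array of candidate symbol values (e.g. QPSK points)
    """
    k = H.shape[1]
    # Initial estimates from a simple matched filter, sliced to the constellation.
    mf = H.conj().T @ y / np.sum(np.abs(H) ** 2, axis=0)
    x_hat = slice_to(mf, constellation)

    for _ in range(iterations):
        new_hat = np.empty_like(x_hat)
        for i in range(k):
            # Subtract, in parallel, the interference reconstructed from all
            # other symbols' current estimates, then re-detect symbol i.
            interference = H @ x_hat - H[:, i] * x_hat[i]
            residual = y - interference
            zi = H[:, i].conj() @ residual / np.sum(np.abs(H[:, i]) ** 2)
            new_hat[i] = slice_to(np.array([zi]), constellation)[0]
        x_hat = new_hat
    return x_hat

def slice_to(z, constellation):
    """Map each soft estimate to the nearest constellation point."""
    z = np.asarray(z)
    return constellation[np.argmin(np.abs(z[:, None] - constellation[None, :]), axis=1)]
```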
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. This damages some of the key properties of space-time codes and can lead to substantial performance degradation. In this paper, we study the design of linear dispersion codes (LDCs) for such asynchronous cooperative communication networks. First, the concept of conventional LDCs is extended to a delay-tolerant version and new design criteria are discussed. We then propose a new design method that yields delay-tolerant LDCs reaching the optimal Jensen's upper bound on ergodic capacity as well as the minimum average pairwise error probability. The proposed design employs a stochastic gradient algorithm to approach a local optimum. Moreover, it is improved by using simulated-annealing-type optimization to increase the likelihood of reaching the global optimum. The proposed method allows for a flexible number of nodes, receive antennas and modulated symbols, and a flexible codeword length. Simulation results confirm the performance of the newly proposed delay-tolerant LDCs.
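The optimization strategy named in the abstract, a stochastic gradient search improved by simulated-annealing-type perturbations, can be illustrated as below; the cost function here is a placeholder, not the paper's error-probability or capacity criterion, and the dispersion-matrix parameterization is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(A):
    """Placeholder design cost; in the paper this would be an error-probability
    or capacity-based criterion over the dispersion matrices."""
    return np.sum((A @ A.conj().T - np.eye(A.shape[0])) ** 2).real

def numerical_grad(f, A, eps=1e-6):
    g = np.zeros_like(A)
    for idx in np.ndindex(A.shape):
        dA = np.zeros_like(A)
        dA[idx] = eps
        g[idx] = (f(A + dA) - f(A - dA)) / (2 * eps)
    return g

def optimize(A, steps=500, lr=1e-2, temp0=1.0, cooling=0.99):
    """Gradient descent combined with simulated-annealing-style perturbations."""
    temp = temp0
    best, best_cost = A.copy(), cost(A)
    for _ in range(steps):
        A = A - lr * numerical_grad(cost, A)                  # local gradient step
        candidate = A + temp * rng.standard_normal(A.shape)   # random perturbation
        dE = cost(candidate) - cost(A)
        if dE < 0 or rng.random() < np.exp(-dE / max(temp, 1e-12)):
            A = candidate                                     # Metropolis acceptance
        if cost(A) < best_cost:
            best, best_cost = A.copy(), cost(A)
        temp *= cooling                                       # cooling schedule
    return best

A0 = rng.standard_normal((4, 4))
A_opt = optimize(A0)
```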
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. Such a timing mismatch may cause rank deficiency of conventional space-time codes and, thus, performance degradation. One efficient way to overcome this issue is to use delay-tolerant space-time codes (DT-STCs). The existing DT-STCs are designed assuming that the transmitter has no knowledge about the channels. In this paper, we show how the performance of DT-STCs can be improved by utilizing some feedback information. A general framework for designing DT-STCs with limited feedback is first proposed, allowing for flexible system parameters such as the number of transmit/receive antennas, the number of modulated symbols, and the length of codewords. Then, a new design method is proposed that combines Lloyd's algorithm and the stochastic gradient-descent algorithm to obtain an optimal codebook of STCs, particularly for systems with a linear minimum-mean-square-error receiver. Finally, simulation results confirm the performance of the newly designed DT-STCs with limited feedback.
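The codebook-design component based on Lloyd's algorithm can be sketched as follows; the distortion measure is simplified to a Euclidean distance over training vectors, whereas the paper's method couples Lloyd's algorithm with stochastic gradient descent under a receiver-specific criterion.

```python
import numpy as np

def lloyd_codebook(samples, codebook_size, iterations=50, seed=0):
    """Design a codebook with Lloyd's algorithm under Euclidean distortion.

    samples: training vectors, shape (n, d) -- e.g. quantities derived from
             channel realizations in a limited-feedback design.
    Returns a (codebook_size, d) array of codewords.
    """
    rng = np.random.default_rng(seed)
    codebook = samples[rng.choice(len(samples), codebook_size, replace=False)]
    for _ in range(iterations):
        # Nearest-neighbour condition: assign each sample to its closest codeword.
        d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Centroid condition: move each codeword to the mean of its region.
        for k in range(codebook_size):
            members = samples[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook
```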
Abstract:
The design of binary morphological operators that are translation-invariant and locally defined by a finite neighborhood window corresponds to the problem of designing Boolean functions. As in any supervised classification problem, morphological operators designed from a training sample also suffer from overfitting. Large neighborhoods tend to lead to performance degradation of the designed operator. This work proposes a multilevel design approach to deal with the issue of designing large-neighborhood-based operators. The main idea is inspired by stacked generalization (a multilevel classifier design approach) and consists of, at each training level, combining the outcomes of the previous-level operators. The final operator is a multilevel operator that ultimately depends on a larger neighborhood than that of the individual operators that have been combined. Experimental results show that two-level operators obtained by combining operators designed on subwindows of a large window consistently outperform single-level operators designed on the full window. They also show that iterating two-level operators is an effective multilevel approach for obtaining better results.
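A minimal sketch of the two-level (stacked) design idea is given below, with scikit-learn decision trees standing in for the learned Boolean functions and an illustrative sub-window layout; it is not the training procedure used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative two-level design: first-level operators are trained on
# sub-windows of a large window; the second-level operator combines their
# binary outputs (stacked generalization).

def extract_patterns(image, window=(5, 5)):
    """Collect the binary pattern observed in `window` around every pixel."""
    h, w = window
    padded = np.pad(image, ((h // 2,), (w // 2,)), mode="constant")
    patterns = []
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patterns.append(padded[i:i + h, j:j + w].ravel())
    return np.array(patterns)

def train_two_level(image, target, subwindow_slices):
    """Train first-level operators on sub-windows, then combine their outputs."""
    X = extract_patterns(image)
    y = target.ravel()
    first_level, first_outputs = [], []
    for sl in subwindow_slices:                    # each sl selects sub-window pixels
        clf = DecisionTreeClassifier().fit(X[:, sl], y)
        first_level.append(clf)
        first_outputs.append(clf.predict(X[:, sl]))
    # Second level: learn from the vector of first-level decisions.
    Z = np.column_stack(first_outputs)
    second_level = DecisionTreeClassifier().fit(Z, y)
    return first_level, second_level

# Example sub-window layout for a flattened 5x5 window (purely illustrative):
# left 5x3 block and right 5x3 block, overlapping in the middle column.
subwindows = [
    np.array([r * 5 + c for r in range(5) for c in range(3)]),
    np.array([r * 5 + c for r in range(5) for c in range(2, 5)]),
]
```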
Abstract:
Today, third generation networks are a consolidated reality, and user expectations for new applications and services are becoming higher and higher. Therefore, new systems and technologies are necessary to meet market needs and user requirements. This has driven the development of fourth generation networks. "Wireless networks for the fourth generation" is the expression used to describe the next step in wireless communications. There is no formal definition of what these fourth generation networks are; however, we can say that the next generation networks will be based on the coexistence of heterogeneous networks, on integration with the existing radio access networks (e.g. GPRS, UMTS, WiFi) and, in particular, on new emerging architectures that are gaining more and more relevance, such as Wireless Ad Hoc and Sensor Networks (WASN). Thanks to their characteristics, fourth generation wireless systems will be able to offer custom-made solutions and applications personalized according to user requirements; they will offer all types of services at an affordable cost, and solutions characterized by flexibility, scalability and reconfigurability. This PhD work has been focused on WASNs, self-configuring networks that are not based on a fixed infrastructure but are infrastructure-less, where devices have to automatically generate the network in the initial phase and maintain it through reconfiguration procedures (if node mobility, energy drain, etc. cause disconnections). The main part of the PhD activity has been focused on an analytical study of connectivity models for wireless ad hoc and sensor networks, while a smaller part of the work was experimental. Both the theoretical and the experimental activities have had a common aim, related to the performance evaluation of WASNs. Concerning the theoretical analysis, the objective of the connectivity studies has been the evaluation of models for interference estimation. This is due to the fact that interference is the most important cause of performance degradation in WASNs. As a consequence, it is very important to find an accurate model that allows its investigation, and I have tried to obtain a model that is as realistic and general as possible, in particular for the evaluation of the interference coming from bounded interfering areas (e.g. a WiFi hot spot or a wireless-covered research laboratory). On the other hand, the experimental activity has led to throughput and Packet Error Rate measurements on a real IEEE 802.15.4 Wireless Sensor Network.
Abstract:
The aim of this thesis is the elucidation of structure-property relationships of molecular semiconductors for electronic devices. This involves the use of a comprehensive set of simulation techniques, ranging from quantum-mechanical to numerical stochastic methods, and also the development of ad hoc computational tools. In more detail, the research activity concerned two main topics: the study of the electronic properties and structural behaviour of liquid crystalline (LC) materials based on functionalised oligo(p-phenyleneethynylene) (OPE), and the investigation of the effect of the electric field associated with OFET operation on pentacene thin-film stability. In this dissertation, a novel family of substituted OPE liquid crystals with applications in stimuli-responsive materials is presented. In particular, simulations not only provide evidence for the characterization of the liquid crystalline phases of different OPEs, but also elucidate the role of charge transfer states in donor-acceptor LCs containing an endohedral metallofullerene moiety. Such systems can be regarded as promising candidates for organic photovoltaics. Furthermore, exciton dynamics simulations are performed as a way to obtain additional information about the degree of order in OPE columnar phases. Finally, ab initio and molecular mechanics simulations are used to investigate the influence of an applied electric field on pentacene reactivity and stability. The reaction path of pentacene thermal dimerization in the presence of an external electric field is investigated; the results can be related to the fatigue effect observed in OFETs, which show significant performance degradation even in the absence of external agents. In addition, the effect of the gate voltage on a pentacene monolayer is simulated, and the results are compared to X-ray diffraction measurements performed for the first time on operating OFETs.
Abstract:
Reliable data transfer is one of the most difficult tasks to accomplish in multihop wireless networks. Traditional transport protocols like TCP face severe performance degradation over multihop networks, given the noisy nature of wireless media and the unstable connectivity conditions. The success of TCP in wired networks motivates its extension to wireless networks. A crucial challenge faced by TCP over these networks is how to operate smoothly with the 802.11 wireless MAC protocol, which implements a retransmission mechanism at the link level in addition to short RTS/CTS control frames for avoiding collisions. These features render the transmission of TCP acknowledgments (ACKs) quite costly: data and ACK packets cause similar medium-access overheads despite the much smaller size of the ACKs. In this paper, we further evaluate our dynamic adaptive strategy for reducing ACK-induced overhead and the consequent collisions. Our approach resembles the sender side's congestion control. The receiver adapts itself by delaying more ACKs when the channel is unconstrained and fewer otherwise. This improves not only throughput but also power consumption. Simulation evaluations show significant improvements in several scenarios.
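An illustrative receiver-side rule in the spirit of this adaptive strategy is sketched below; the specific window bounds and the out-of-order trigger are assumptions, not the exact mechanism evaluated in the paper.

```python
class AdaptiveDelayedAck:
    """Illustrative receiver-side delayed-ACK controller.

    The delay window (number of data packets acknowledged by one cumulative
    ACK) grows while the channel looks unconstrained and shrinks when signs
    of loss or congestion appear. This mirrors the spirit of sender-side
    congestion control, not the exact rules evaluated in the paper.
    """

    def __init__(self, min_window=1, max_window=4):
        self.min_window = min_window
        self.max_window = max_window
        self.window = min_window          # current ACK delay window
        self.unacked = 0                  # data packets received since last ACK

    def on_data_packet(self, out_of_order):
        """Return True if an ACK should be sent now."""
        self.unacked += 1
        if out_of_order:
            # Possible loss: shrink the window and acknowledge immediately
            # so the sender can recover quickly.
            self.window = max(self.min_window, self.window // 2)
            self.unacked = 0
            return True
        if self.unacked >= self.window:
            # Channel looks fine: acknowledge and probe a larger window.
            self.window = min(self.max_window, self.window + 1)
            self.unacked = 0
            return True
        return False
```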
Abstract:
ZnO has proven to be a multifunctional material with important nanotechnological applications. ZnO nanostructures can be grown in various forms such as nanowires, nanorods, nanobelts, nanocombs, etc. In this work, ZnO nanostructures are grown in a double-quartz-tube configuration thermal Chemical Vapor Deposition (CVD) system. We focus on functionalized ZnO nanostructures, controlling their structures and tuning their properties for various applications. The following topics have been investigated: 1. We have fabricated various ZnO nanostructures using a thermal CVD technique. The growth parameters were optimized and studied for different nanostructures. 2. We have studied the application of ZnO nanowires (ZnO NWs) for field effect transistors (FETs). Unintentional n-type conductivity was observed in our FETs based on as-grown ZnO NWs. We have then shown for the first time that controlled incorporation of hydrogen into ZnO NWs can introduce p-type character to the nanowires. We further found that the n-type behaviors remained, leading to the ambipolar behavior of hydrogen-incorporated ZnO NWs. Importantly, the detected p- and n-type behaviors are stable for longer than two years when devices are kept in ambient conditions. All of this can be explained by an ab initio model of Zn vacancy-hydrogen complexes, which can serve as donors, acceptors, or green photoluminescence quenchers, depending on the number of hydrogen atoms involved. 3. Next, ZnO NWs were tested for electron field emission. We focus on reducing the threshold field (Eth) of field emission from non-aligned ZnO NWs. Encouraged by our results on enhancing the conductivity of ZnO NWs by hydrogen annealing described in Chapter 3, we have studied the effect of hydrogen annealing on improving the field emission behavior of our ZnO NWs. We found that optimally annealed ZnO NWs offered a much lower threshold electric field and improved emission stability. We also studied field emission from ZnO NWs at moderate vacuum levels. We found that there exists a minimum Eth as we scale the threshold field with pressure. This behavior is explained by referring to Paschen's law. 4. We have studied the application of ZnO nanostructures for solar energy harvesting. First, as-grown and (CdSe)ZnS QD-decorated ZnO NBs and ZnO NWs were tested for photocurrent generation. All these nanostructures offered fast response times to solar radiation. The decoration of QDs decreases the stable current level produced by ZnO NWs but increases that generated by NBs. It is possible that NBs offer more stable surfaces for the attachment of QDs. In addition, our results suggest that the performance degradation of solar cells made by growing ZnO NWs on ITO is due to the increase in resistance of ITO after the high-temperature growth process. Hydrogen annealing also improves the efficiency of the solar cells by decreasing the resistance of ITO. Due to these issues with ITO, we used Ni foil as the growth substrate. The performance of solar cells made by growing ZnO NWs on Ni foils degraded after hydrogen annealing at both low (300 °C) and high (600 °C) temperatures, since annealing passivates native defects in ZnO NWs and thus reduces the absorption of the visible spectrum from our solar simulator. Decoration with QDs improves the efficiency of such solar cells by increasing the absorption of light in the visible region. Using a better electrolyte than phosphate buffer solution (PBS), such as KI, also improves the solar cell efficiency.
5. Finally, we have attempted p-type doping of ZnO NWs using various growth precursors, including phosphorus pentoxide, sodium fluoride, and zinc fluoride. We have also attempted to create p-type carriers by introducing interstitial fluorine through annealing ZnO nanostructures in diluted fluorine gas. In brief, we were unable to reproduce the growth of reported p-type ZnO nanostructures. However, we have identified the window of temperature and duration for post-growth annealing of ZnO NWs in dilute fluorine gas that leads to suppression of native defects. This is the first experimental effort on post-growth annealing of ZnO NWs in dilute fluorine gas, although this has been suggested by a recent theory for creating p-type semiconductors. In our experiments the defect band peak due to native defects is found to decrease after annealing at 300 °C for 10 – 30 minutes. One of the major future works will be to determine the type of charge carriers in our annealed ZnO NWs.
Abstract:
We describe a system for performing SLA-driven management and orchestration of distributed infrastructures composed of services supporting mobile computing use cases. In particular, we focus on a Follow-Me Cloud scenario in which we consider mobile users accessing cloud-enabled services. We combine an SLA-driven approach to infrastructure optimization with forecast-based preventive actions against performance degradation and with pattern detection for supporting mobile cloud infrastructure management. We present our system's information model and architecture, including the algorithmic support and the proposed scenarios for system evaluation.
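A minimal sketch of a forecast-based preventive trigger of this kind is shown below; the metric, threshold and linear-trend forecaster are illustrative assumptions rather than the system's actual algorithmic support.

```python
import numpy as np

def forecast_breach(samples, threshold, horizon):
    """Fit a linear trend to recent metric samples (e.g. response time) and
    report after how many future steps the forecast crosses `threshold`,
    or None if no breach is forecast within `horizon` steps.

    Illustrative stand-in for the forecast-based preventive mechanism
    described in the abstract, not the system's actual algorithm.
    """
    t = np.arange(len(samples))
    slope, intercept = np.polyfit(t, samples, 1)        # linear trend
    future_t = np.arange(len(samples), len(samples) + horizon)
    forecast = slope * future_t + intercept
    breach_steps = np.nonzero(forecast >= threshold)[0]
    return int(breach_steps[0]) if breach_steps.size else None

# Usage sketch: if a breach is forecast, an orchestrator could migrate the
# service closer to the user (Follow-Me Cloud) or scale it out pre-emptively.
latency_ms = [110, 118, 121, 127, 135, 142]             # hypothetical samples
steps_until_breach = forecast_breach(latency_ms, threshold=180, horizon=10)
if steps_until_breach is not None:
    print(f"SLA breach forecast in {steps_until_breach} steps; trigger preventive action")
```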
Abstract:
Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifacts. For these reasons, the assessment of detector uniformity is one of the most common activities associated with a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the determination of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1). This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to the reviewer so that he or she may be better equipped to identify performance degradation prior to its manifestation in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions of non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time-series floods with induced, progressively worsening defects present within the field of view. The sensitivity of conventional, global figures of merit for detecting changes in uniformity was evaluated and compared to that of the new image-space techniques. The image-space algorithms provide a reproducible means of detecting regions of non-uniformity before any single flood image has a NEMA uniformity value in excess of 5%. Their sensitivity was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures of merit demonstrated their sensitivity to shifts in detector uniformity. Because the image-space algorithms are computationally efficient, they should be used alongside the trending of the global figures of merit in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
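For reference, the conventional NEMA-style integral uniformity figure of merit mentioned above (the 5% limit) can be computed roughly as in the sketch below; the nine-point smoothing kernel is the standard one, while the useful-field-of-view masking and other details of the NEMA procedure are left to the caller.

```python
import numpy as np
from scipy.ndimage import convolve

def integral_uniformity(flood, ufov_mask=None):
    """Conventional NEMA-style integral uniformity of a flood image.

    Applies the standard nine-point smoothing kernel, restricts the analysis
    to the useful field of view (here simply a boolean mask supplied by the
    caller), and returns 100 * (max - min) / (max + min). Pixel-size
    requirements and edge handling from the NEMA standard are omitted.
    """
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float)
    kernel /= kernel.sum()
    smoothed = convolve(flood.astype(float), kernel, mode="nearest")
    values = smoothed[ufov_mask] if ufov_mask is not None else smoothed.ravel()
    return 100.0 * (values.max() - values.min()) / (values.max() + values.min())

# A flood would be flagged once integral_uniformity(image, mask) exceeds 5 %,
# which is the conventional limit mentioned in the abstract; the proposed
# image-space methods aim to detect localized degradation before that point.
```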
Abstract:
Indoor localization systems are becoming more interesting to researchers because of the attractiveness of business cases in various application fields. A WiFi-based passive localization system can provide user location information to third-party providers of positioning services. However, indoor localization techniques are prone to multipath and Non-Line-Of-Sight (NLOS) propagation, which lead to significant performance degradation. To overcome these problems, we provide a passive localization system for WiFi targets with several improved localization algorithms. Through Software Defined Radio (SDR) techniques, we extract Channel Impulse Response (CIR) information at the physical layer. The CIR is then used to mitigate the multipath fading problem. We propose to use a Nonlinear Regression (NLR) method to relate the filtered power information to propagation distances, which significantly improves the ranging accuracy compared to the commonly used log-distance path loss model. To mitigate the influence of ranging errors, a new trilateration algorithm is designed as well by combining the Weighted Centroid and Constrained Weighted Least Square (WC-CWLS) algorithms. Experiment results show that our algorithm is robust against ranging errors and outperforms the linear least squares algorithm and the weighted centroid algorithm.
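The baseline ranging and positioning steps referred to above can be illustrated as follows: the commonly used log-distance path loss model inverted to obtain ranges, followed by a plain weighted-centroid estimate. The calibration constants are assumed values, and the sketch does not reproduce the paper's NLR ranging or the combined WC-CWLS trilateration.

```python
import numpy as np

def log_distance_range(rssi_dbm, p0_dbm=-40.0, n=2.5, d0=1.0):
    """Invert the log-distance path loss model to estimate range in metres.

    rssi = p0 - 10 * n * log10(d / d0)  =>  d = d0 * 10**((p0 - rssi) / (10 n))
    p0_dbm and n are calibration parameters (assumed values here); the paper
    replaces this model with a nonlinear regression fitted to measurements.
    """
    return d0 * 10 ** ((p0_dbm - np.asarray(rssi_dbm)) / (10 * n))

def weighted_centroid(anchors, distances, g=1.0):
    """Weighted-centroid position estimate from anchor positions and ranges.

    anchors: (k, 2) array of known anchor coordinates
    distances: (k,) estimated ranges; closer anchors get larger weights.
    """
    w = 1.0 / np.maximum(np.asarray(distances), 1e-6) ** g
    return (w[:, None] * np.asarray(anchors)).sum(axis=0) / w.sum()

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ranges = log_distance_range([-62.0, -70.0, -68.0, -75.0])
print(weighted_centroid(anchors, ranges))
```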
Abstract:
In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, leading to a reduction of the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this computational performance degradation: the IMPL-EX integration scheme [Oliver,2006], which renders the elemental matrix contribution positive definite, and arclength-type continuation methods [Carrera,1994], which allow capturing the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need to use small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables in the damage model, is presented. Finally, the numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
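A rough sketch of the IMPL-EX idea for a scalar damage model is given below: the internal variable is extrapolated explicitly from previously converged values so that the tangent used within the step stays positive definite, while the implicit update is stored for the next step. The damage law and equivalent strain below are simplified placeholders, not the constitutive model used in this work.

```python
import numpy as np

def implex_step(r_prev, r_prev2, dt, dt_prev, strain, stiffness, r0):
    """One IMPL-EX-style update for a scalar isotropic damage model (sketch).

    The internal variable is extrapolated explicitly from the two previously
    converged values, which keeps the algorithmic tangent positive definite
    within the step; the implicit value is then updated for the next step.
    Model details (damage law, equivalent strain) are simplified placeholders.
    """
    # Explicit extrapolation of the strain-like internal variable.
    r_tilde = r_prev + (dt / dt_prev) * (r_prev - r_prev2)

    # Equivalent strain (energy-norm-like placeholder choice).
    eps_eq = np.sqrt(strain @ stiffness @ strain)

    # Damage from the *extrapolated* variable -> stress used in equilibrium.
    d_tilde = 1.0 - r0 / max(r_tilde, r0)
    stress = (1.0 - d_tilde) * stiffness @ strain

    # Implicit (standard) update of the internal variable, stored for the
    # extrapolation at the next time step.
    r_new = max(r_prev, eps_eq)
    return stress, r_new

# Keeping time steps small limits the extrapolation error, which is the
# drawback discussed in the abstract and motivates the convergence study.
```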
Abstract:
All interconnected regulated systems are prone to impedance-based interactions, making them sensitive to instability and transient-performance degradation. The applied control method significantly affects the characteristics of the converter in terms of its sensitivity to different impedance interactions. This paper provides, for the first time, the whole set of impedance-type internal parameters and the formulas according to which the interaction sensitivity can be fully explained and analyzed. The formulation given in this paper can be applied equally to measured frequency responses or to predicted analytic transfer functions. Usually, distributed dc-dc systems are constructed using ready-made power modules without thorough knowledge of the actual power-stage and control-system designs. As a consequence, the interaction characterization has to be based on the frequency responses measurable via the input and output terminals. A buck converter with four different control methods is experimentally characterized in the frequency domain to demonstrate the effect of the control method on the interaction sensitivity. The presented analytical models are used to explain the phenomena behind the changes in the interaction sensitivity.
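A common screening tool for such impedance-based interactions is the minor-loop gain, the ratio of the source output impedance to the load input impedance evaluated from terminal frequency responses; the sketch below applies this conventional check only and does not reproduce the full set of internal parameters derived in the paper.

```python
import numpy as np

def minor_loop_gain(z_out_source, z_in_load):
    """Minor-loop gain L(jw) = Z_out,source / Z_in,load from measured responses."""
    return np.asarray(z_out_source) / np.asarray(z_in_load)

def interaction_risk(freqs_hz, z_out_source, z_in_load, margin_db=6.0):
    """Flag frequencies where |L| comes within `margin_db` of 0 dB.

    A simple screening rule: impedance overlap (|L| near or above unity)
    indicates potential instability or transient-performance degradation,
    which would then be examined with a full Nyquist-type analysis.
    """
    L = minor_loop_gain(z_out_source, z_in_load)
    mag_db = 20.0 * np.log10(np.abs(L))
    risky = mag_db > -margin_db
    return [(f, m) for f, m, r in zip(freqs_hz, mag_db, risky) if r]

# Usage: feed in the frequency responses measured at the converter's input
# and output terminals, as described in the abstract for ready-made modules.
```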