19 results for overhead

in CentAUR: Central Archive University of Reading - UK


Relevance:

10.00%

Publisher:

Abstract:

We propose a mechanism to explain suggested links between seismic activity and ionospheric changes detected overhead. Specifically, we explain changes in the natural extremely low-frequency (ELF) radio noise recently observed in the topside ionosphere aboard the DEMETER satellite at night, before major earthquakes. Our mechanism utilises increased electrical conductivity of surface layer air before a major earthquake, which reduces the surface-ionosphere electrical resistance. This increases the vertical fair weather current, and (to maintain continuity of electron flow) lowers the ionosphere. Magnitudes of crucial parameters are estimated and found to be consistent with observations. Natural variability in ionospheric and atmospheric electrical properties is evaluated, and may be overcome using a hybrid detection approach. Suggested experiments to investigate the mechanism involve measuring the cut-off frequency of ELF “tweeks”, the amplitude and phase of very low frequency radio waves in the Earth–ionosphere waveguide, or medium frequency radar, incoherent scatter or rocket studies of the lower ionospheric electron density.
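
As a back-of-the-envelope illustration of the proposed mechanism, the surface-to-ionosphere path can be treated as two columnar resistances in series driven by the ionospheric potential, so that lowering the boundary-layer resistance raises the vertical fair-weather current. The sketch below uses assumed, order-of-magnitude values rather than the estimates given in the paper.

```python
# Illustrative sketch of the column-resistance argument (assumed values, not
# figures from the paper): the surface-to-ionosphere path is modelled as a
# boundary-layer columnar resistance in series with a free-atmosphere columnar
# resistance, driven by the ionospheric potential.

V_IONOSPHERE = 250e3       # V, typical ionospheric potential (assumed)
R_BOUNDARY_LAYER = 6e16    # ohm m^2, columnar resistance of the lowest ~2 km (assumed)
R_FREE_ATMOSPHERE = 7e16   # ohm m^2, columnar resistance of the rest of the column (assumed)

def fair_weather_current(r_boundary_layer):
    """Vertical fair-weather current density (A m^-2) for a given
    boundary-layer columnar resistance."""
    return V_IONOSPHERE / (r_boundary_layer + R_FREE_ATMOSPHERE)

j_quiet = fair_weather_current(R_BOUNDARY_LAYER)
# Hypothetical pre-earthquake enhancement: surface-air conductivity rises,
# halving the boundary-layer columnar resistance.
j_enhanced = fair_weather_current(R_BOUNDARY_LAYER / 2)

print(f"quiet-time current density: {j_quiet:.2e} A/m^2")
print(f"enhanced current density:   {j_enhanced:.2e} A/m^2")
print(f"relative increase:          {100 * (j_enhanced / j_quiet - 1):.0f}%")
```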

Relevance:

10.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system and reduce the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exist when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose a significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly it will be necessary to observe application performance, both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
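
To make the notion of overhead concrete, the sketch below shows how relative overheads could be computed from benchmark wall-clock times on bare metal versus a para-virtualized system, with and without logging; the benchmark names and timings are hypothetical, not results from this paper.

```python
# Hypothetical benchmark wall-clock times in seconds; the configuration names
# and numbers are illustrative, not measurements from the paper.
bare_metal = {"linpack": 120.0, "fft": 95.0}
virtualized = {"linpack": 126.0, "fft": 101.5}
virtualized_logging = {"linpack": 131.0, "fft": 106.0}

def overhead_pct(baseline, measured):
    """Relative overhead of `measured` with respect to `baseline`, in percent."""
    return 100.0 * (measured - baseline) / baseline

for bench in bare_metal:
    virt = overhead_pct(bare_metal[bench], virtualized[bench])
    log = overhead_pct(bare_metal[bench], virtualized_logging[bench])
    print(f"{bench}: para-virtualization {virt:.1f}%, with logging {log:.1f}%")
```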

Relevance:

10.00%

Publisher:

Abstract:

MOTIVATION: The accurate prediction of the quality of 3D models is a key component of successful protein tertiary structure prediction methods. Currently, clustering or consensus based Model Quality Assessment Programs (MQAPs) are the most accurate methods for predicting 3D model quality; however they are often CPU intensive as they carry out multiple structural alignments in order to compare numerous models. In this study, we describe ModFOLDclustQ - a novel MQAP that compares 3D models of proteins without the need for CPU intensive structural alignments by utilising the Q measure for model comparisons. The ModFOLDclustQ method is benchmarked against the top established methods in terms of both accuracy and speed. In addition, the ModFOLDclustQ scores are combined with those from our older ModFOLDclust method to form a new method, ModFOLDclust2, that aims to provide increased prediction accuracy with negligible computational overhead. RESULTS: The ModFOLDclustQ method is competitive with leading clustering based MQAPs for the prediction of global model quality, yet it is up to 150 times faster than the previous version of the ModFOLDclust method at comparing models of small proteins (<60 residues) and over 5 times faster at comparing models of large proteins (>800 residues). Furthermore, a significant improvement in accuracy can be gained over the previous clustering based MQAPs by combining the scores from ModFOLDclustQ and ModFOLDclust to form the new ModFOLDclust2 method, with little impact on the overall time taken for each prediction. AVAILABILITY: The ModFOLDclustQ and ModFOLDclust2 methods are available to download from: http://www.reading.ac.uk/bioinf/downloads/ CONTACT: l.j.mcguffin@reading.ac.uk.
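
The computational saving comes from comparing each model's internal Cα–Cα distances, so no superposition or structural alignment is required. The sketch below uses one common Gaussian-weighted form of such a distance-matrix score; the exact Q measure formulation used by ModFOLDclustQ may differ.

```python
import numpy as np

def pairwise_distances(coords):
    """All-against-all Euclidean distances for an (N, 3) array of CA coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def q_score(coords_a, coords_b, sigma=2.0):
    """Alignment-free similarity of two models of the same sequence: compare
    intramolecular distance matrices with a Gaussian weight.
    (Illustrative form; not necessarily the exact ModFOLDclustQ definition.)"""
    da, db = pairwise_distances(coords_a), pairwise_distances(coords_b)
    iu = np.triu_indices(da.shape[0], k=1)       # unique residue pairs only
    return float(np.mean(np.exp(-((da[iu] - db[iu]) ** 2) / (2.0 * sigma ** 2))))

# Example with two synthetic 60-residue "models" of the same protein.
rng = np.random.default_rng(0)
model_a = rng.normal(size=(60, 3)) * 10.0
model_b = model_a + rng.normal(scale=0.5, size=(60, 3))   # a perturbed copy
print(f"Q-type similarity: {q_score(model_a, model_b):.3f}")
```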

Relevance:

10.00%

Publisher:

Abstract:

The commercial process in construction projects is an expensive and highly variable overhead. Collaborative working practices carry many benefits, which are widely disseminated, but little information is available about their costs. Transaction Cost Economics is a theoretical framework that seeks explanations for why there are firms and how the boundaries of firms are defined through the “make-or-buy” decision. However, it is not a framework that offers explanations for the relative costs of procuring construction projects in different ways. The idea that different methods of procurement will have characteristically different costs is tested by way of a survey. The relevance of transaction cost economics to the study of commercial costs in procurement is doubtful. The survey shows that collaborative working methods cost neither more nor less than traditional methods. But the benefits of collaboration mean that there is a great deal of enthusiasm for collaboration rather than competition.

Relevance:

10.00%

Publisher:

Abstract:

Pull pipelining, a pipeline technique in which data is pulled by successor stages from predecessor stages, is proposed. Control circuits using a synchronous, a semi-synchronous and an asynchronous approach are given. Simulation examples for a DLX generic RISC datapath show that the common control-pipeline circuit overhead is avoided using the proposal. Applications to linear systolic arrays, in cases when computation finishes at early stages in the array, are foreseen. This would allow run-time, data-driven digital frequency modulation of synchronous pipelined designs. This has applications in implementing algorithms that exhibit average-case processing time using a synchronous approach.
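
As a software analogy of the pull model (the paper concerns hardware control circuits, not Python), each stage below requests its next datum from its predecessor only when the downstream consumer asks for a result:

```python
# Software analogy of a pull pipeline: each stage is a generator that pulls
# items from its predecessor on demand, so nothing advances until the final
# consumer requests a result. (Illustrative only; the paper deals with
# hardware control circuits.)

def source(values):
    for v in values:
        yield v

def stage(upstream, fn):
    """A pipeline stage that pulls one item at a time from `upstream`."""
    for item in upstream:
        yield fn(item)

pipeline = stage(stage(source(range(5)), lambda x: x + 1), lambda x: x * 2)

for result in pipeline:       # pulling here drives the whole pipeline
    print(result)             # -> 2, 4, 6, 8, 10
```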

Relevance:

10.00%

Publisher:

Abstract:

The performance benefit when using Grid systems comes from different strategies, among which partitioning applications into parallel tasks is the most important. However, in most cases the gain from partitioning is offset by synchronization overhead, mainly due to the high variability of the completion times of the different tasks, which, in turn, is due to the large heterogeneity of Grid nodes. For this reason, it is important to have models which capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves on classical modelling techniques and highlights the impact of resource heterogeneity and network latency on application performance.
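
A toy illustration of why node heterogeneity inflates synchronization overhead (assumed task times, not measurements from the paper): when a parallel step waits for all tasks at a barrier, the step takes as long as the slowest task, so the spread of completion times is pure waiting.

```python
import random

random.seed(42)
N_TASKS = 16

# Hypothetical task service times (seconds): same mean, different spread.
homogeneous = [random.uniform(9.0, 11.0) for _ in range(N_TASKS)]
heterogeneous = [random.uniform(2.0, 18.0) for _ in range(N_TASKS)]

def barrier_waiting_fraction(times):
    """Fraction of a barrier-synchronized step spent waiting: the gap between
    the slowest task and the average task, relative to the step time."""
    return (max(times) - sum(times) / len(times)) / max(times)

print(f"homogeneous nodes:   {barrier_waiting_fraction(homogeneous):.1%} of the step is waiting")
print(f"heterogeneous nodes: {barrier_waiting_fraction(heterogeneous):.1%} of the step is waiting")
```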

Relevance:

10.00%

Publisher:

Abstract:

We have conducted the first extensive field test of two new methods to retrieve optical properties for overhead clouds that range from patchy to overcast. The methods use measurements of zenith radiance at 673 and 870 nm wavelengths and require the presence of green vegetation in the surrounding area. The test was conducted at the Atmospheric Radiation Measurement Program Oklahoma site during September–November 2004. These methods work because at 673 nm (red) and 870 nm (near infrared (NIR)), clouds have nearly identical optical properties, while vegetated surfaces reflect quite differently. The first method, dubbed REDvsNIR, retrieves not only cloud optical depth τ but also radiative cloud fraction. Because of the 1-s time resolution of our radiance measurements, we are able for the first time to capture changes in cloud optical properties at the natural timescale of cloud evolution. We compared values of τ retrieved by REDvsNIR to those retrieved from downward shortwave fluxes and from microwave brightness temperatures. The flux method generally underestimates τ relative to the REDvsNIR method. Even for overcast but inhomogeneous clouds, differences between REDvsNIR and the flux method can be as large as 50%. In addition, REDvsNIR agreed to better than 15% with the microwave method for both overcast and broken clouds. The second method, dubbed COUPLED, retrieves τ by combining zenith radiances with fluxes. While extra information from fluxes was expected to improve retrievals, this is not always the case. In general, however, the COUPLED and REDvsNIR methods retrieve τ to within 15% of each other.
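
The physical contrast the method exploits can be caricatured in a few lines: green vegetation is dark in the red and bright in the near infrared, while cloud scattering is nearly identical at the two wavelengths, so the relative difference of the two zenith radiances responds to how much of the signal comes from cloud. The numbers and the toy radiance model below are assumptions for illustration only; the actual REDvsNIR retrieval of τ and radiative cloud fraction is considerably more involved.

```python
# Schematic of the red vs. near-infrared contrast exploited by REDvsNIR
# (all numbers are illustrative assumptions, not values from the paper).
ALBEDO_RED_VEG = 0.05   # green vegetation is dark at 673 nm (assumed)
ALBEDO_NIR_VEG = 0.45   # and bright at 870 nm (assumed)

def zenith_radiance(cloud_fraction, surface_albedo, cloud_signal=0.6, clear_signal=0.1):
    """Toy zenith radiance: a cloud term that is identical at both wavelengths
    plus a surface-reflected term that scales with surface albedo."""
    return cloud_fraction * cloud_signal + surface_albedo * (clear_signal + 0.3 * cloud_fraction)

for cf in (0.0, 0.5, 1.0):
    red = zenith_radiance(cf, ALBEDO_RED_VEG)
    nir = zenith_radiance(cf, ALBEDO_NIR_VEG)
    # The normalized difference shrinks as the cloud term dominates the signal.
    ndi = (nir - red) / (nir + red)
    print(f"cloud fraction {cf:.1f}: red={red:.3f}, nir={nir:.3f}, normalized difference={ndi:.2f}")
```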

Relevance:

10.00%

Publisher:

Abstract:

Deep Brain Stimulation (DBS) has been successfully used throughout the world for the treatment of Parkinson's disease symptoms. To control abnormal spontaneous electrical activity in target brain areas, DBS utilizes a continuous stimulation signal. This continuous power draw means that its implanted battery power source needs to be replaced every 18–24 months. To prolong the life span of the battery, a technique to accurately recognize and predict the onset of Parkinson's disease tremors in human subjects, and thus implement an on-demand stimulator, is discussed here. The approach is to use a radial basis function neural network (RBFNN) based on particle swarm optimization (PSO) and principal component analysis (PCA) with Local Field Potential (LFP) data recorded via the stimulation electrodes to predict activity related to tremor onset. To test this approach, LFPs from the subthalamic nucleus (STN) obtained through deep brain electrodes implanted in a Parkinson's patient are used to train the network. To validate the network's performance, electromyographic (EMG) signals from the patient's forearm are recorded in parallel with the LFPs to accurately determine occurrences of tremor, and these are compared to the performance of the network. It has been found that detection accuracies of up to 89% are possible. Performance comparisons have also been made between a conventional RBFNN and an RBFNN based on PSO, which show a marginal decrease in performance but a notable reduction in computational overhead.
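
A compact sketch of the PCA-plus-RBF-network portion of such a pipeline, using synthetic data in place of LFP recordings; the PSO step that tunes the RBF centres and widths in the paper is omitted here and replaced by randomly chosen fixed centres.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "LFP feature" data: 200 windows x 16 features with a binary
# tremor/no-tremor label. Purely illustrative, not real LFP recordings.
X = rng.normal(size=(200, 16))
y = (X[:, :4].sum(axis=1) + 0.3 * rng.normal(size=200) > 0).astype(float)

# --- PCA: project onto the leading principal components -------------------
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ vt[:6].T                      # keep 6 components

# --- RBF network with fixed centres (PSO tuning omitted) ------------------
centres = Z[rng.choice(len(Z), size=20, replace=False)]
width = 2.0                            # assumed kernel width

def rbf_features(z):
    d2 = ((z[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

# Train only the linear output weights by least squares.
Phi = rbf_features(Z)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = (Phi @ w > 0.5).astype(float)
print(f"training detection accuracy: {(pred == y).mean():.1%}")
```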

Relevance:

10.00%

Publisher:

Abstract:

The use of expert system techniques in power distribution system design is examined. The selection and siting of equipment on overhead line networks is chosen for investigation as the use of equipment such as auto-reclosers, etc., represents a substantial investment and has a significant effect on the reliability of the system. Through past experience with both equipment and network operations, most decisions in selection and siting of this equipment are made intuitively, following certain general guidelines or rules of thumb. This heuristic nature of the problem lends itself to solution using an expert system approach. A prototype has been developed and is currently under evaluation in the industry. Results so far have demonstrated both the feasibility and benefits of the expert system as a design aid.
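
As a flavour of the rule-based style such a design aid takes, the fragment below checks a handful of siting heuristics for an auto-recloser; the rules and thresholds are invented placeholders, not the guidelines encoded in the prototype.

```python
# Toy rule check for siting an auto-recloser on an overhead feeder.
# The attributes, thresholds and rules are hypothetical illustrations only.
feeder = {
    "length_km": 28.0,
    "customers_downstream": 850,
    "annual_transient_faults": 6,
    "distance_to_upstream_device_km": 10.0,
}

RULES = [
    ("long rural feeder",         lambda f: f["length_km"] > 20.0),
    ("many customers exposed",    lambda f: f["customers_downstream"] > 500),
    ("frequent transient faults", lambda f: f["annual_transient_faults"] >= 4),
    ("large unprotected span",    lambda f: f["distance_to_upstream_device_km"] > 8.0),
]

fired = [name for name, condition in RULES if condition(feeder)]
print("rules fired:", fired)
print("recommend auto-recloser" if len(fired) >= 3 else "no recloser recommended")
```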

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a three-shot improvement scheme for the hard-decision based method (HDM), an implementation solution for linear decorrelating detector (LDD) in asynchronous DS/CDMA systems. By taking advantage of the preceding (already reconstructed) bit and the matched filter output for the following two bits, the coupling between temporally adjacent bits (TABs), which always exists for asynchronous systems, is greatly suppressed and the performance of the original HDM is substantially improved. This new scheme requires no signaling overhead yet offers nearly the same performance as those more complicated methods. Also, it can easily accommodate the change in the number of active users in the channel, as no symbol/bit grouping is involved. Finally, the influence of synchronisation errors is investigated.

Relevance:

10.00%

Publisher:

Abstract:

The Eyjafjallajökull volcano in Iceland erupted explosively on 14 April 2010, emitting a plume of ash into the atmosphere. The ash was transported from Iceland toward Europe, where mostly cloud-free skies allowed ground-based lidars at Chilbolton in England and Leipzig in Germany to estimate the mass concentration in the ash cloud as it passed overhead. The UK Met Office's Numerical Atmospheric-dispersion Modeling Environment (NAME) has been used to simulate the evolution of the ash cloud from the Eyjafjallajökull volcano during the initial phase of the ash emissions, 14–16 April 2010. NAME captures the timing and sloped structure of the ash layer observed over Leipzig, close to the central axis of the ash cloud. Relatively small errors in the ash cloud position, probably caused by the cumulative effect of errors in the driving meteorology en route, result in a timing error at distances far from the central axis of the ash cloud. Taking the timing error into account, NAME is able to capture the sloped ash layer over the UK. Comparison of the lidar observations and NAME simulations has allowed an estimation of the plume height time series to be made. It is necessary to include in the model input the large variations in plume height in order to accurately predict the ash cloud structure at long range. Quantitative comparison with the mass concentrations at Leipzig and Chilbolton suggests that around 3% of the total emitted mass is transported as far as these sites by small (<100 μm diameter) ash particles.

Relevance:

10.00%

Publisher:

Abstract:

Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, Random Prism suffers, like any ensemble learner, from a high computational overhead due to replication of the data and the induction of multiple base classifiers. Hence even modestly sized datasets may impose a computational challenge for ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
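
A minimal sketch of the parallelisation idea: each worker receives a bootstrap sample and induces one base classifier independently. A trivial threshold learner stands in for a Prism rule-set inducer, and a local process pool stands in for the Hadoop cluster used in the paper.

```python
# Minimal sketch of parallelising an ensemble learner: each worker receives a
# bootstrap sample of the data and induces one base classifier independently.
import random
from multiprocessing import Pool

def train_base_classifier(sample):
    """Stand-in base learner: pick the threshold on x that best separates the labels."""
    best_t, best_acc = 0.5, 0.0
    for t in [i / 20 for i in range(1, 20)]:
        acc = sum((x > t) == bool(y) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(thresholds, x):
    """Majority vote over the ensemble members."""
    votes = sum(x > t for t in thresholds)
    return int(votes * 2 > len(thresholds))

if __name__ == "__main__":
    random.seed(0)
    data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(1000))]
    samples = [random.choices(data, k=len(data)) for _ in range(8)]  # bootstrap samples
    with Pool(4) as pool:                      # induce base classifiers in parallel
        thresholds = pool.map(train_base_classifier, samples)
    acc = sum(predict(thresholds, x) == y for x, y in data) / len(data)
    print(f"ensemble training accuracy: {acc:.1%}")
```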

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an adaptive frame length mechanism based on a cross-layer analysis of intrinsic relations between the MAC frame length, bit error rate (BER) of the wireless link and normalized goodput. The proposed mechanism selects the optimal frame length that keeps the service normalized goodput at required levels while satisfying the lowest requirement on the BER, thus increasing the transmission reliability. Numerical results are provided and show that an optimal frame length satisfying the lowest BER requirement does indeed exist. The performance of BER requirement as a function of the MAC frame length is evaluated and compared for transmission scenarios with and without automatic repeat request (ARQ). Furthermore, issues related to the MAC overhead length are also discussed to illuminate the functionality and performance of the proposed mechanism.
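
The trade-off can be seen in the textbook form of normalized goodput for an L-bit frame carrying H bits of MAC overhead over a link with bit error rate p, S(L) = ((L - H)/L) * (1 - p)^L, which is maximized at an intermediate frame length. This standard relation is used below only to show that an optimum exists; it is not necessarily the paper's exact cross-layer model.

```python
# Normalized goodput of an L-bit MAC frame with H bits of header/overhead on a
# link with bit error rate p, assuming independent bit errors and no FEC
# (textbook model, used here only to show that an optimal L exists).
def goodput(L, H=64, p=1e-4):
    return (L - H) / L * (1.0 - p) ** L

candidates = range(128, 8193, 64)
best = max(candidates, key=goodput)
for L in (256, best, 8192):
    tag = " (optimal)" if L == best else ""
    print(f"L={L:5d} bits  goodput={goodput(L):.3f}{tag}")
```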

Relevance:

10.00%

Publisher:

Abstract:

Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
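
A serial sketch of the communication pattern being optimized: each process computes local per-centroid sums and counts for its partition, and only those small partial results need to be combined at each iteration. The dynamic-group collective described in the paper is emulated here by a plain loop over the partitions.

```python
import numpy as np

rng = np.random.default_rng(3)
K, DIM = 3, 2

# Four "processes", each holding a partition of the data (sizes deliberately non-uniform).
partitions = [rng.normal(size=(n, DIM)) + rng.integers(0, 5, size=(1, DIM))
              for n in (50, 200, 80, 400)]
centroids = np.vstack(partitions)[:K].copy()    # naive initialisation

for _ in range(10):
    # Local step: each partition produces only K partial sums and K counts,
    # which is all that would need to be communicated.
    sums, counts = np.zeros((K, DIM)), np.zeros(K)
    for block in partitions:
        d2 = ((block[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for k in range(K):
            members = block[labels == k]
            sums[k] += members.sum(axis=0)
            counts[k] += len(members)
    # "Collective" step: combine the partial results and update the centroids.
    nonempty = counts > 0
    centroids[nonempty] = sums[nonempty] / counts[nonempty, None]

print("final centroids:\n", centroids)
```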

Relevance:

10.00%

Publisher:

Abstract:

The extraterrestrial solar spectrum (ESS) is an important component in near-infrared (near-IR) radiative transfer calculations. However, the impact of a particular choice of ESS in these regions has been given very little attention. A line-by-line (LBL) radiative transfer model has been used to calculate the absorbed solar irradiance and solar heating rates in the near-IR from 2000–10000 cm−1 (1–5 μm) using different ESS datasets. For overhead sun conditions in a mid-latitude summer atmosphere, the absorbed irradiances could differ by up to about 11 W m−2 (8.2%), while the tropospheric and stratospheric heating rates could differ by up to about 0.13 K day−1 (8.1%) and 0.19 K day−1 (7.6%), respectively. The spectral shape of the ESS also has a small but non-negligible impact on these quantities in the near-IR.