836 results for Overhead conductors
Abstract:
The commercial process in construction projects is an expensive and highly variable overhead. Collaborative working practices carry many benefits, which are widely disseminated, but little information is available about their costs. Transaction Cost Economics is a theoretical framework that seeks explanations for why there are firms and how the boundaries of firms are defined through the “make-or-buy” decision. However, it is not a framework that offers explanations for the relative costs of procuring construction projects in different ways. The idea that different methods of procurement will have characteristically different costs is tested by way of a survey. The relevance of transaction cost economics to the study of commercial costs in procurement is doubtful. The survey shows that collaborative working methods cost neither more nor less than traditional methods. But the benefits of collaboration mean that there is a great deal of enthusiasm for collaboration rather than competition.
Abstract:
Pull pipelining, a pipeline technique in which data is pulled by successor stages from predecessor stages, is proposed. Control circuits using a synchronous, a semi-synchronous and an asynchronous approach are given. Simulation examples for a DLX generic RISC datapath show that the overhead of common pipeline control circuits is avoided with the proposal. Applications to linear systolic arrays, in cases where computation finishes at early stages in the array, are foreseen. This would allow run-time, data-driven digital frequency modulation of synchronous pipelined designs, with applications to implementing algorithms that exhibit average-case processing time using a synchronous approach.
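As an aside, the pull-based control flow described above has a natural software analogy: Python generators are pull-driven, with each stage requesting a value from its predecessor only when its own successor asks for one. The sketch below is only that analogy, with made-up stage names; it does not model the proposed synchronous, semi-synchronous or asynchronous control circuits.

```python
# Illustrative software analogy of pull pipelining: each stage pulls data from
# its predecessor only when its successor requests a value. Stage names and
# operations are hypothetical; the hardware control circuits are not modelled.

def source(values):
    for v in values:          # predecessor stage: yields raw data on demand
        yield v

def decode(prev):
    for v in prev:            # pulled from the source only when needed
        yield ("op", v)

def execute(prev):
    for op, v in prev:        # pulled from decode only when needed
        yield v * 2           # placeholder "execution"

if __name__ == "__main__":
    pipeline = execute(decode(source(range(5))))
    # The final consumer drives the whole pipeline by pulling results:
    print(list(pipeline))     # [0, 2, 4, 6, 8]
```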
Abstract:
The performance benefit when using Grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effect of the synchronization overhead, mainly due to the high variability of completion times of the different tasks, which, in turn, is due to the large heterogeneity of Grid nodes. For this reason, it is important to have models which capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves the classical modelling techniques and highlights the impact of resource heterogeneity and network latency on the application performance.
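To see why the synchronization overhead grows with the variability of task completion times, a toy Monte Carlo sketch (hypothetical parameters, not the paper's queueing-network model) is enough: a fork-join job only finishes when its slowest task does, so the mean makespan rises above the ideal per-task time as the coefficient of variation increases.

```python
# Toy illustration (not the paper's queueing-network model): with heterogeneous
# task times, a fork-join job ends when the slowest of N parallel tasks ends,
# so high variability inflates completion time relative to the ideal mean time.
import random

random.seed(0)
N_TASKS = 16
RUNS = 10_000
MEAN = 1.0                               # mean task service time (assumed units)

for cv in (0.1, 0.5, 1.0):               # coefficient of variation of task times
    sigma = cv * MEAN
    makespans = []
    for _ in range(RUNS):
        tasks = [max(0.0, random.gauss(MEAN, sigma)) for _ in range(N_TASKS)]
        makespans.append(max(tasks))     # synchronization: wait for slowest task
    avg = sum(makespans) / RUNS
    print(f"cv={cv:.1f}  mean makespan={avg:.2f}  overhead vs ideal={avg / MEAN - 1:.0%}")
```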
Abstract:
We have conducted the first extensive field test of two new methods to retrieve optical properties for overhead clouds that range from patchy to overcast. The methods use measurements of zenith radiance at 673 and 870 nm wavelengths and require the presence of green vegetation in the surrounding area. The test was conducted at the Atmospheric Radiation Measurement Program Oklahoma site during September–November 2004. These methods work because at 673 nm (red) and 870 nm (near infrared (NIR)), clouds have nearly identical optical properties, while vegetated surfaces reflect quite differently. The first method, dubbed REDvsNIR, retrieves not only cloud optical depth τ but also radiative cloud fraction. Because of the 1-s time resolution of our radiance measurements, we are able for the first time to capture changes in cloud optical properties at the natural timescale of cloud evolution. We compared values of τ retrieved by REDvsNIR to those retrieved from downward shortwave fluxes and from microwave brightness temperatures. The flux method generally underestimates τ relative to the REDvsNIR method. Even for overcast but inhomogeneous clouds, differences between REDvsNIR and the flux method can be as large as 50%. In addition, REDvsNIR agreed to better than 15% with the microwave method for both overcast and broken clouds. The second method, dubbed COUPLED, retrieves τ by combining zenith radiances with fluxes. While extra information from fluxes was expected to improve retrievals, this is not always the case. In general, however, the COUPLED and REDvsNIR methods retrieve τ to within 15% of each other.
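The retrieval principle can be illustrated as a lookup-table inversion: a forward model tabulates zenith radiances at 673 and 870 nm over a grid of cloud optical depth and radiative cloud fraction, and the retrieval selects the grid point whose radiance pair best matches the measurement. The toy forward model and numbers below are placeholders, not the REDvsNIR tables.

```python
# Sketch of a lookup-table inversion in the spirit of REDvsNIR (placeholder
# forward model and numbers; the real method uses full radiative transfer).
import numpy as np

def toy_forward(tau, cf):
    """Hypothetical zenith radiances (red, NIR) for optical depth tau and
    radiative cloud fraction cf. Vegetation makes the clear-sky NIR radiance
    larger than the red one; clouds behave almost identically at both."""
    clear_red, clear_nir = 0.02, 0.10          # assumed clear-sky values
    cloud = tau / (tau + 10.0)                 # crude saturating cloud signal
    red = (1 - cf) * clear_red + cf * cloud
    nir = (1 - cf) * clear_nir + cf * cloud
    return red, nir

# Build the lookup table over a (tau, cloud fraction) grid.
taus = np.linspace(0.5, 60, 120)
cfs = np.linspace(0.0, 1.0, 101)
table = np.array([[toy_forward(t, c) for c in cfs] for t in taus])  # (ntau, ncf, 2)

def retrieve(red_obs, nir_obs):
    """Return (tau, cf) of the table entry closest to the observed pair."""
    dist = (table[..., 0] - red_obs) ** 2 + (table[..., 1] - nir_obs) ** 2
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    return taus[i], cfs[j]

# Example: invert a radiance pair generated by the same toy model.
print(retrieve(*toy_forward(tau=12.0, cf=0.8)))
```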
Abstract:
Deep Brain Stimulation (DBS) has been successfully used throughout the world for the treatment of Parkinson's disease symptoms. To control abnormal spontaneous electrical activity in target brain areas, DBS utilizes a continuous stimulation signal. This continuous power draw means that its implanted battery power source needs to be replaced every 18–24 months. To prolong the life span of the battery, a technique to accurately recognize and predict the onset of Parkinson's disease tremors in human subjects, and thus implement an on-demand stimulator, is discussed here. The approach is to use a radial basis function neural network (RBFNN) based on particle swarm optimization (PSO) and principal component analysis (PCA), with Local Field Potential (LFP) data recorded via the stimulation electrodes, to predict activity related to tremor onset. To test this approach, LFPs from the subthalamic nucleus (STN), obtained through deep brain electrodes implanted in a Parkinson's patient, are used to train the network. To validate the network's performance, electromyographic (EMG) signals from the patient's forearm are recorded in parallel with the LFPs to accurately determine occurrences of tremor, and these are compared with the predictions of the network. It has been found that detection accuracies of up to 89% are possible. Performance comparisons have also been made between a conventional RBFNN and an RBFNN based on PSO, which show a marginal decrease in performance but a notable reduction in computational overhead.
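A minimal sketch of the classification pipeline, on synthetic stand-in data, is shown below: PCA reduces the feature vectors and an RBF network maps the reduced features to a tremor/no-tremor label. The PSO-based optimisation of the RBF centres and widths reported in the paper is not reproduced; centres are simply drawn from the training data.

```python
# Minimal RBF-network classifier with a PCA front end (synthetic stand-in data;
# the paper's PSO-based tuning of centres/widths is not shown here).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for LFP feature windows: a few latent "oscillatory power"
# factors embedded in 32 noisy features, with the label driven by the first factor.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 32))
X = latent @ mixing + 0.3 * rng.normal(size=(200, 32))
y = (latent[:, 0] > 0).astype(float)            # hypothetical tremor/no-tremor label

# --- PCA: project onto the first k principal components --------------------
k = 5
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                               # reduced features (200 x k)

# --- RBF network: Gaussian hidden layer + linear output weights ------------
n_centres = 10
centres = Z[rng.choice(len(Z), n_centres, replace=False)]
width = np.mean(np.linalg.norm(Z[:, None] - centres[None], axis=2))  # heuristic width

def hidden(Zin):
    d2 = ((Zin[:, None] - centres[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

H = hidden(Z)
w, *_ = np.linalg.lstsq(H, y, rcond=None)       # output weights by least squares

pred = (hidden(Z) @ w > 0.5).astype(float)
print("training detection accuracy:", (pred == y).mean())
```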
Abstract:
The use of expert system techniques in power distribution system design is examined. The selection and siting of equipment on overhead line networks is chosen for investigation as the use of equipment such as auto-reclosers, etc., represents a substantial investment and has a significant effect on the reliability of the system. Through past experience with both equipment and network operations, most decisions in selection and siting of this equipment are made intuitively, following certain general guidelines or rules of thumb. This heuristic nature of the problem lends itself to solution using an expert system approach. A prototype has been developed and is currently under evaluation in the industry. Results so far have demonstrated both the feasibility and benefits of the expert system as a design aid.
Abstract:
This paper proposes a three-shot improvement scheme for the hard-decision based method (HDM), an implementation solution for linear decorrelating detector (LDD) in asynchronous DS/CDMA systems. By taking advantage of the preceding (already reconstructed) bit and the matched filter output for the following two bits, the coupling between temporally adjacent bits (TABs), which always exists for asynchronous systems, is greatly suppressed and the performance of the original HDM is substantially improved. This new scheme requires no signaling overhead yet offers nearly the same performance as those more complicated methods. Also, it can easily accommodate the change in the number of active users in the channel, as no symbol/bit grouping is involved. Finally, the influence of synchronisation errors is investigated.
Abstract:
The Eyjafjallajökull volcano in Iceland erupted explosively on 14 April 2010, emitting a plume of ash into the atmosphere. The ash was transported from Iceland toward Europe where mostly cloud-free skies allowed ground-based lidars at Chilbolton in England and Leipzig in Germany to estimate the mass concentration in the ash cloud as it passed overhead. The UK Met Office's Numerical Atmospheric-dispersion Modeling Environment (NAME) has been used to simulate the evolution of the ash cloud from the Eyjafjallajökull volcano during the initial phase of the ash emissions, 14–16 April 2010. NAME captures the timing and sloped structure of the ash layer observed over Leipzig, close to the central axis of the ash cloud. Relatively small errors in the ash cloud position, probably caused by the cumulative effect of errors in the driving meteorology en route, result in a timing error at distances far from the central axis of the ash cloud. Taking the timing error into account, NAME is able to capture the sloped ash layer over the UK. Comparison of the lidar observations and NAME simulations has allowed an estimation of the plume height time series to be made. It is necessary to include in the model input the large variations in plume height in order to accurately predict the ash cloud structure at long range. Quantitative comparison with the mass concentrations at Leipzig and Chilbolton suggest that around 3% of the total emitted mass is transported as far as these sites by small (<100 μm diameter) ash particles.
Abstract:
Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, like any ensemble learner, Random Prism suffers from a high computational overhead due to the replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may pose a computational challenge to ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
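The data-parallel pattern being exploited can be sketched with a generic bagged ensemble: each worker induces a base classifier on its own bootstrap sample and predictions are combined by majority vote. A decision tree is used below purely as a stand-in base learner on synthetic data; the actual system induces Prism rule sets and runs on Hadoop rather than a local process pool.

```python
# Sketch of the data-parallel ensemble (bagging) pattern behind Random Prism:
# each worker trains a base classifier on its own bootstrap sample and the
# ensemble predicts by majority vote. A decision tree is a stand-in base
# learner; the actual system induces Prism rule sets on Hadoop.
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def train_one(seed):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=len(X))      # bootstrap sample (with replacement)
    return DecisionTreeClassifier(max_depth=5, random_state=seed).fit(X[idx], y[idx])

def predict(ensemble, Xq):
    votes = np.stack([m.predict(Xq) for m in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:             # base learners trained in parallel
        ensemble = list(pool.map(train_one, range(20)))
    print("training accuracy:", (predict(ensemble, X) == y).mean())
```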
Abstract:
This paper presents an adaptive frame length mechanism based on a cross-layer analysis of intrinsic relations between the MAC frame length, bit error rate (BER) of the wireless link and normalized goodput. The proposed mechanism selects the optimal frame length that keeps the service normalized goodput at required levels while satisfying the lowest requirement on the BER, thus increasing the transmission reliability. Numerical results are provided and show that an optimal frame length satisfying the lowest BER requirement does indeed exist. The performance of BER requirement as a function of the MAC frame length is evaluated and compared for transmission scenarios with and without automatic repeat request (ARQ). Furthermore, issues related to the MAC overhead length are also discussed to illuminate the functionality and performance of the proposed mechanism.
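The underlying trade-off can be made concrete with the standard normalized-goodput expression for a frame of L bits carrying H header (overhead) bits over a link with bit error rate p and no ARQ, assuming independent bit errors: G(L) = ((L − H)/L)(1 − p)^L. Larger frames amortise the header while shorter frames are less likely to be corrupted, so an intermediate L maximises G. The sketch below uses hypothetical parameter values, not the paper's scenarios.

```python
# Normalized goodput of an unacknowledged frame of L bits with H header bits
# over a link with bit error rate p, assuming independent bit errors:
#   G(L) = ((L - H) / L) * (1 - p)**L
# Larger L amortises the header; smaller L reduces the chance of frame
# corruption, so an intermediate L maximises G. Parameter values are assumed.
H = 64            # MAC header/overhead bits (assumed)
p = 1e-3          # bit error rate (assumed)

def goodput(L):
    return (L - H) / L * (1 - p) ** L

best_L = max(range(H + 1, 5000), key=goodput)
print(f"optimal frame length ~ {best_L} bits, goodput {goodput(best_L):.3f}")
```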
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
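The communication structure that the protocol reduces can be sketched as follows: in parallel k-means each process only needs to exchange per-cluster partial sums and counts (k × d values), never its raw data, at the end of each iteration. The sketch below simulates the processes sequentially on assumed data; the dynamic-group collective and the approximation scheme of the paper are not reproduced.

```python
# Sketch of the communication structure of parallel k-means: each "process"
# computes per-cluster partial sums and counts over its local data partition,
# and only these small arrays are combined to update the shared centroids.
# Processes are simulated sequentially; the paper's dynamic-group collective
# and approximation scheme are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
k, d, n_procs = 3, 2, 4
partitions = [rng.normal(loc=rng.uniform(-5, 5, d), size=(500, d)) for _ in range(n_procs)]

centroids = np.vstack(partitions)[rng.choice(2000, k, replace=False)]

for _ in range(20):
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for local in partitions:                       # each loop body = one process
        labels = np.argmin(((local[:, None] - centroids) ** 2).sum(axis=2), axis=1)
        for c in range(k):
            sums[c] += local[labels == c].sum(axis=0)
            counts[c] += (labels == c).sum()
    # "All-reduce" of the partial sums/counts, then a shared centroid update.
    centroids = sums / np.maximum(counts, 1)[:, None]

print(np.round(centroids, 2))
```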
Abstract:
The extraterrestrial solar spectrum (ESS) is an important component in near-infrared (near-IR) radiative transfer calculations. However, the impact of a particular choice of the ESS in this region has been given very little attention. A line-by-line (LBL) transfer model has been used to calculate the absorbed solar irradiance and solar heating rates in the near-IR from 2000 to 10000 cm−1 (1–5 μm) using different ESSs. For overhead sun conditions in a mid-latitude summer atmosphere, the absorbed irradiances could differ by up to about 11 W m−2 (8.2%), while the tropospheric and stratospheric heating rates could differ by up to about 0.13 K day−1 (8.1%) and 0.19 K day−1 (7.6%), respectively. The spectral shape of the ESS also has a small but non-negligible impact on these quantities in the near-IR.
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (re) and 2-D column mean droplet number concentration (Nd). By using an adaptation of the ensemble Kalman filter, radiances are used to constrain the optical properties of the clouds using a forward model that employs full 3-D radiative transfer while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in re, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and re are retrieved with average errors of 0.05–0.08 g m−3 and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m−2.
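The update at the heart of the method is the ensemble Kalman filter analysis step, which blends the forward-modelled radiances with the measured ones according to the ensemble statistics. A generic, self-contained sketch is given below; the state, forward model and numbers are synthetic stand-ins (the paper's state is gridded LWC and effective radius, and its forward model is full 3-D radiative transfer).

```python
# Generic (stochastic) ensemble Kalman filter analysis step on synthetic data;
# a linear operator stands in for the paper's 3-D radiative transfer model.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 20, 15, 200

truth = rng.normal(size=n_state)
H = rng.normal(size=(n_obs, n_state)) / np.sqrt(n_state)   # stand-in forward model
obs_err = 0.1
y = H @ truth + rng.normal(scale=obs_err, size=n_obs)      # noisy "radiances"

# Prior (forecast) ensemble around a biased first guess.
background = truth + rng.normal(scale=1.0, size=n_state)
Xf = background[:, None] + rng.normal(scale=1.0, size=(n_state, n_ens))

# Ensemble covariances and Kalman gain.
xf_mean = Xf.mean(axis=1, keepdims=True)
A = Xf - xf_mean                                   # state anomalies
HXf = H @ Xf
HA = HXf - HXf.mean(axis=1, keepdims=True)         # observation-space anomalies
Pxy = A @ HA.T / (n_ens - 1)
Pyy = HA @ HA.T / (n_ens - 1) + obs_err ** 2 * np.eye(n_obs)
K = Pxy @ np.linalg.inv(Pyy)

# Stochastic update: perturb the observations for each ensemble member.
Yp = y[:, None] + rng.normal(scale=obs_err, size=(n_obs, n_ens))
Xa = Xf + K @ (Yp - HXf)

print("prior RMSE    :", np.sqrt(((Xf.mean(axis=1) - truth) ** 2).mean()))
print("posterior RMSE:", np.sqrt(((Xa.mean(axis=1) - truth) ** 2).mean()))
```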
Abstract:
The idea of Sustainable Intensification comes as a response to the challenge of avoiding the overexploitation of resources such as land, water and energy while increasing food production to meet the demand of a growing global population. Sustainable Intensification means that farmers need to simultaneously increase yields and sustainably use limited natural resources, such as water. Within the agricultural sector, water has a number of uses including irrigation, spraying, drinking for livestock and washing (vegetables, livestock buildings). In order to achieve Sustainable Intensification, measures are needed that inform policy makers and managers about the relative performance of farms as well as about possible ways to improve that performance. We provide a benchmarking tool to assess relative water use efficiency at the farm level and suggest pathways to improve farm-level productivity by identifying best practices for reducing excessive use of water for irrigation. Data Envelopment Analysis techniques, including an analysis of returns to scale, were used to evaluate any excess in agricultural water use across 66 horticulture farms located in different river basin catchments across England. We found that the farms in the sample can, on average, reduce water requirements by 35% and still achieve the same output (gross margin) when compared to their peers on the frontier. In addition, 47% of the farms operate under increasing returns to scale, indicating that these farms will need to develop economies of scale to achieve input cost savings. Regarding the adoption of specific water use efficiency management practices, we found that the use of a decision support tool, the recycling of water and the installation of trickle/drip/spray-line irrigation systems have a positive impact on water use efficiency at the farm level, whereas the use of other irrigation systems, such as overhead irrigation, was found to have a negative effect on water use efficiency.
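A minimal sketch of the input-oriented DEA efficiency score behind this kind of benchmarking is given below, using toy data rather than the 66-farm dataset. A score of, say, 0.65 corresponds to being able to cut inputs by about 35% while producing the same output, which is how the 35% water-saving figure above should be read.

```python
# Minimal input-oriented DEA (variable returns to scale) efficiency scores,
# solved as one linear programme per farm with scipy. Toy data: two inputs
# (water, other costs) and one output (gross margin); the paper's dataset of
# 66 horticulture farms is not reproduced here.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100, 50], [120, 40], [80, 70], [150, 60]], dtype=float)  # farms x inputs
Y = np.array([[200], [210], [190], [230]], dtype=float)                 # farms x outputs
n, m = X.shape
s = Y.shape[1]

def efficiency(o):
    """theta* for farm o: min theta s.t. sum_j lam_j x_j <= theta * x_o,
    sum_j lam_j y_j >= y_o, sum_j lam_j = 1 (VRS), lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])     # X.T @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # -Y.T @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)      # convexity (VRS) constraint
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(n):
    print(f"farm {o}: efficiency = {efficiency(o):.2f}")
```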
Abstract:
This paper shows that radiometer channel radiances for cloudy atmospheric conditions can be simulated with an optimised frequency grid derived under clear-sky conditions. A new clear-sky optimised grid is derived for AVHRR channel 5 (12 μm, 833 cm−1). For HIRS channel 11 (7.33 μm, 1364 cm−1) and AVHRR channel 5, radiative transfer simulations using an optimised frequency grid are compared with simulations using a reference grid, where the optimised grid has roughly 100–1000 times fewer frequencies than the full grid. The root mean square error between the optimised and the reference simulation is found to be less than 0.3 K for both comparisons, with the magnitude of the bias less than 0.03 K. The simulations have been carried out with the radiative transfer model Atmospheric Radiative Transfer Simulator (ARTS), version 2, using a backward Monte Carlo module for the treatment of clouds. With this module, the optimised simulations are more than 10 times faster than the reference simulations. Although the number of photons is the same, the smaller number of frequencies reduces the overhead of preparing the optical properties for each frequency. With deterministic scattering solvers, the relative decrease in runtime would be even greater. The results allow for new radiative transfer applications, such as the development of new retrievals, because it becomes much quicker to carry out a large number of simulations. The conclusions are applicable to any downlooking infrared radiometer.
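The notion of an optimised frequency grid can be illustrated by choosing a small subset of monochromatic frequencies and fitting non-negative weights so that their weighted sum reproduces the channel-integrated radiance over a set of training cases. The sketch below uses synthetic radiances and a greedy selection; it is not the procedure actually used to derive the ARTS/AVHRR grids.

```python
# Toy sketch of a "representative frequency" grid: pick a few monochromatic
# frequencies and fit non-negative weights so their weighted sum reproduces the
# channel-integrated radiance over training cases. Synthetic data; not the
# procedure used to derive the ARTS/AVHRR grids.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_freq, n_cases, n_rep = 500, 80, 5

# Synthetic monochromatic radiances (cases x frequencies) and channel response.
R = rng.uniform(80, 120, size=(n_cases, 1)) + rng.normal(scale=5, size=(n_cases, n_freq))
response = np.exp(-np.linspace(-2, 2, n_freq) ** 2)
response /= response.sum()
channel = R @ response                        # reference channel radiance per case

# Greedy selection: repeatedly add the frequency that most reduces the fit error.
selected = []
for _ in range(n_rep):
    best = None
    for f in range(n_freq):
        if f in selected:
            continue
        w, err = nnls(R[:, selected + [f]], channel)
        if best is None or err < best[1]:
            best = (f, err)
    selected.append(best[0])

w, err = nnls(R[:, selected], channel)
rms = np.sqrt(np.mean((R[:, selected] @ w - channel) ** 2))
print(f"representative frequencies: {sorted(selected)}, RMS error: {rms:.3f}")
```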