978 results for Weighted average


Relevance:

100.00%

Publisher:

Abstract:

Pricing is an effective tool to control congestion and achieve quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in the case of a network of nodes under a single service class and multiple queues, and present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state-dependent price levels for individual queues at each node. The pricing policy depends on a weighted average queue length at each node, which helps reduce frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. In our numerical results we observe a considerable improvement in both throughput and delay performance using our scheme over a recently proposed related scheme. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent, over all routes, in all cases studied.
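
The weighted average queue length driving the pricing policy is, in spirit, the same exponentially weighted moving average that RED maintains. The following is a minimal Python sketch of that idea only; the smoothing weight, thresholds and price levels are illustrative assumptions, not the paper's values.

```python
# Minimal sketch: RED-style exponentially weighted average queue length
# used to select a price level from a small set of tiers (state-dependent
# pricing). All parameter values below are illustrative assumptions.

def update_avg_queue(avg, instantaneous, w=0.002):
    """RED-style EWMA: avg <- (1 - w) * avg + w * q."""
    return (1.0 - w) * avg + w * instantaneous

def price_for_queue(avg, thresholds=(10, 30, 60), prices=(0.0, 0.5, 1.0, 2.0)):
    """Pick a price tier based on the smoothed queue length."""
    for level, threshold in enumerate(thresholds):
        if avg < threshold:
            return prices[level]
    return prices[-1]

avg = 0.0
for q in [5, 12, 40, 80, 70, 20]:          # sample instantaneous queue lengths
    avg = update_avg_queue(avg, q)
    print(f"q={q:3d}  avg={avg:6.2f}  price={price_for_queue(avg)}")
```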

Relevance:

100.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) have many application scenarios where external clock synchronisation may be required because a WSN may consist of components which are not connected to each other. In this paper, we first propose a novel weighted average-based internal clock synchronisation (WICS) protocol, which periodically synchronises all the clocks of a WSN with the clock of a reference node. Based on this protocol, we then propose our weighted average-based external clock synchronisation (WECS) protocol. We have analysed the proposed protocols for maximum synchronisation error and shown that it is always upper bounded. Extensive simulation studies of the proposed protocols have been carried out using the Castalia simulator. Simulation results validate our theoretical claim and also show that the proposed protocols perform better than other protocols in terms of synchronisation accuracy. A prototype implementation of the WICS protocol using a few TelosB motes also validates the above conclusions.

Relevance:

100.00%

Publisher:

Abstract:

Clock synchronization is an extremely important requirement of wireless sensor networks (WSNs). There are many application scenarios, such as weather monitoring and forecasting, where external clock synchronization may be required because a WSN may itself consist of components that are not connected to each other. A usual approach for external clock synchronization in WSNs is to synchronize the clock of a reference node with an external source such as UTC, and to have the remaining nodes synchronize with the reference node using an internal clock synchronization protocol. In order to provide highly accurate time, both the offset and the drift rate of each clock with respect to the reference node are estimated from time to time, and these are used to obtain the correct time from the local clock reading. A problem with this approach is that it is difficult to estimate the offset of a clock with respect to the reference node when the drift rates of clocks vary over time. In this paper, we first propose a novel internal clock synchronization protocol based on a weighted averaging technique, which synchronizes all the clocks of a WSN to a reference node periodically. We call this protocol the weighted average based internal clock synchronization (WICS) protocol. Based on this protocol, we then propose our weighted average based external clock synchronization (WECS) protocol. We have analyzed the proposed protocols for maximum synchronization error and shown that it is always upper bounded. Extensive simulation studies of the proposed protocols have been carried out using the Castalia simulator. Simulation results validate our theoretical claim that the maximum synchronization error is always upper bounded and also show that the proposed protocols perform better than other protocols in terms of synchronization accuracy. A prototype implementation of the proposed internal clock synchronization protocol using a few TelosB motes also validates our claim.
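
The core idea of the weighted averaging technique can be illustrated very simply: a node combines several offset estimates relative to the reference into a single weighted average. The sketch below is illustrative only and is not the WICS protocol itself; the sample offsets and the recency weighting are assumptions.

```python
# Illustrative sketch only: estimate a node's clock offset to the reference
# as a weighted average of per-message offset estimates, giving more weight
# to more recent observations. Not the actual WICS message exchange.

def weighted_offset(offset_samples, weights):
    """Weighted average of offset estimates collected over a sync period."""
    assert len(offset_samples) == len(weights) and weights
    total = sum(weights)
    return sum(o * w for o, w in zip(offset_samples, weights)) / total

# offsets (ms) measured against the reference node, newest last
offsets = [2.1, 1.8, 2.4, 2.0]
weights = [1, 2, 3, 4]              # assumed recency weighting
print(f"estimated offset = {weighted_offset(offsets, weights):.2f} ms")
```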

Relevance:

70.00%

Publisher:

Abstract:

This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.
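
The unrestricted weighted least squares estimator uses inverse-variance weights, so its point estimate coincides with the fixed-effect weighted average while its variance term is left unrestricted. A minimal Python sketch of the point estimate only, with made-up effect sizes and standard errors:

```python
# Minimal sketch of an inverse-variance weighted average of study effects,
# the point estimate used by the unrestricted WLS approach (its variance
# component is left unrestricted, unlike fixed-effect meta-analysis).
# Input data are invented for illustration.

def wls_average(effects, std_errors):
    """Inverse-variance weighted average of study effect sizes."""
    weights = [1.0 / se**2 for se in std_errors]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, effects)) / total

effects = [0.30, 0.12, 0.45, 0.22]      # hypothetical study effect sizes
ses     = [0.10, 0.05, 0.20, 0.08]      # hypothetical standard errors
print(f"WLS weighted average = {wls_average(effects, ses):.3f}")
```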

Relevance:

60.00%

Publisher:

Abstract:

The wavelet packet transform decomposes a signal into a set of bases for time–frequency analysis. This decomposition creates an opportunity for implementing distributed data mining, where features are extracted from different wavelet packet bases and serve as feature vectors for applications. This paper presents a novel approach for integrated machine fault diagnosis based on localised wavelet packet bases of vibration signals. The best basis is first determined according to its classification capability. Data mining is then applied to extract features, and local decisions are drawn using Bayesian inference. A final conclusion is reached using a weighted average method in data fusion. A case study on rolling element bearing diagnosis shows that this approach can greatly improve the accuracy of diagnosis.
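
A minimal sketch of the final fusion step only, assuming each wavelet packet basis has already produced a local posterior (class probabilities) and a reliability weight; the posteriors and weights below are illustrative, not the paper's results.

```python
# Illustrative data-fusion step: combine local class posteriors from several
# wavelet packet bases by a weighted average. Values are assumptions.

def fuse(local_posteriors, weights):
    """Weighted average of per-basis class probability vectors."""
    total = sum(weights)
    n_classes = len(local_posteriors[0])
    fused = [0.0] * n_classes
    for probs, w in zip(local_posteriors, weights):
        for k in range(n_classes):
            fused[k] += w * probs[k] / total
    return fused

posteriors = [[0.7, 0.3], [0.6, 0.4], [0.9, 0.1]]   # e.g. [healthy, faulty]
weights = [0.5, 0.2, 0.3]                           # per-basis reliability
print(fuse(posteriors, weights))                    # fused decision probabilities
```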

Relevance:

60.00%

Publisher:

Abstract:

Purpose: The purpose of this review was to present an in-depth analysis of literature identifying the extent of dropout from Internet-based treatment programmes for psychological disorders, and literature exploring the variables associated with dropout from such programmes.

Methods: A comprehensive literature search was conducted on PSYCHINFO and PUBMED with the keywords: dropouts, drop out, dropout, dropping out, attrition, premature termination, termination, non-compliance, treatment, intervention, and program, each in combination with the keywords Internet and web. A total of 19 studies published between 1990 and April 2009 and focusing on dropout from Internet-based treatment programmes involving minimal therapist contact were identified and included in the review.

Results: Dropout ranged from 2 to 83%, and a weighted average of 31% of the participants dropped out of treatment. A range of variables have been examined for their association with dropout from Internet-based treatment programmes for psychological disorders. Despite the numerous variables explored, evidence on any specific variables that may make an individual more likely to drop out of Internet-based treatment is currently limited.

Conclusions: This review highlights the need for more rigorous and theoretically guided research exploring the variables associated with dropping out of Internet-based treatment for psychological disorders.
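
The pooled 31% figure is a weighted average of per-study dropout rates, with study sample sizes as weights. A small arithmetic sketch with invented study sizes and dropout counts:

```python
# Sketch of a sample-size weighted average dropout rate across studies.
# Study sizes and dropout counts are invented for illustration.

studies = [(120, 30), (45, 20), (300, 80)]   # (participants, dropouts) per study
total_n = sum(n for n, _ in studies)
weighted_dropout = sum(d for _, d in studies) / total_n
print(f"weighted average dropout = {weighted_dropout:.1%}")
```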

Relevance:

60.00%

Publisher:

Abstract:

While recent research has provided valuable information on the composition of laser printer particles and their formation mechanisms, and has explained why some printers are emitters whilst others are low emitters, fundamental questions relating to the potential exposure of office workers remain unanswered. In particular: (i) what impact does the operation of laser printers have on the background particle number concentration (PNC) of an office environment over the duration of a typical working day?; (ii) what is the airborne particle exposure of office workers in the vicinity of laser printers?; (iii) what influence does the office ventilation have upon the transport and concentration of particles?; (iv) is there a need to control the generation and/or transport of particles arising from the operation of laser printers within an office environment?; (v) what instrumentation and methodology are relevant for characterising such particles within an office location? We present experimental evidence on temporal and spatial printer PNC during the operation of 107 laser printers within open plan offices of five buildings. We show for the first time that the eight-hour time-weighted average printer particle exposure is significantly less than the eight-hour time-weighted local background particle exposure, but that peak printer particle exposure can be more than two orders of magnitude higher than local background particle exposure. The particle size range is predominantly ultrafine (<100 nm diameter). In addition, we have established that office workers are constantly exposed to non-printer derived particle concentrations, with up to an order of magnitude difference in such exposure amongst offices, and propose that such exposure be controlled along with exposure to printer derived particles. We also propose, for the first time, that peak particle reference values be calculated for each office area, analogous to the criteria used in Australia and elsewhere for evaluating exposure excursions above occupational hazardous chemical exposure standards. A universal peak particle reference value of 2.0 × 10⁴ particles cm⁻³ has been proposed.
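
An eight-hour time-weighted average exposure is simply the exposure concentration integrated over the measured intervals and divided by the 8-hour reference period. A minimal sketch with assumed concentrations and durations (not the study's measurements):

```python
# Sketch of an 8-hour time-weighted average (TWA) particle exposure.
# Concentrations (particles/cm^3) and durations (hours) are illustrative.

intervals = [(3.0e3, 4.0), (2.5e4, 0.5), (4.0e3, 3.5)]   # (concentration, hours)
twa_8h = sum(c * t for c, t in intervals) / 8.0
print(f"8-h TWA = {twa_8h:.2e} particles/cm^3")
```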

Relevance:

60.00%

Publisher:

Abstract:

Forecasts generated by time series models traditionally place greater weight on more recent observations. This paper develops an alternative semi-parametric method for forecasting that does not rely on this convention and applies it to the problem of forecasting asset return volatility. In this approach, a forecast is a weighted average of historical volatility, with the greatest weight given to periods that exhibit similar market conditions to the time at which the forecast is being formed. Weighting is determined by comparing short-term trends in volatility across time (as a measure of market conditions) by means of a multivariate kernel scheme. It is found that the semi-parametric method produces forecasts that are significantly more accurate than a number of competing approaches at both short and long forecast horizons.
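
A minimal sketch of the idea: weight historical volatilities by a kernel applied to the distance between each past short-term volatility trend and the current trend. The Gaussian kernel, bandwidth and sample values below are assumptions for illustration, not the paper's specification.

```python
# Illustrative semi-parametric forecast: a kernel-weighted average of past
# volatilities, where weights depend on how similar past market conditions
# (short-term volatility trends) are to current conditions.
import math

def kernel_forecast(past_vols, past_trends, current_trend, bandwidth=0.5):
    """Gaussian-kernel weighted average of historical volatility."""
    weights = [math.exp(-((t - current_trend) ** 2) / (2 * bandwidth ** 2))
               for t in past_trends]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, past_vols)) / total

vols   = [0.012, 0.020, 0.015, 0.030]    # historical daily volatilities
trends = [-0.1, 0.4, 0.0, 0.6]           # short-term trend at each past date
print(f"forecast = {kernel_forecast(vols, trends, current_trend=0.5):.4f}")
```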

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a hybrid smoothed finite element method (H-SFEM) is developed for solid mechanics problems by combining techniques of the finite element method (FEM) and the node-based smoothed finite element method (NS-FEM) using a triangular mesh. An adjustable parameter is introduced into H-SFEM, and the strain field is assumed to be the weighted average of the compatible strains from FEM and the smoothed strains from NS-FEM. We prove theoretically that the strain energy obtained from the H-SFEM solution lies between those from the compatible FEM solution and the NS-FEM solution, which guarantees the convergence of H-SFEM. Intensive numerical studies are conducted to verify these theoretical results and show that (1) upper and lower bound solutions can always be obtained by adjusting this parameter; (2) there exists a preferable value of this parameter at which H-SFEM produces an ultra-accurate solution.
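
A minimal sketch of the blended strain field, assuming the weighting parameter is called alpha and enters linearly; the parameter name, the linear form and the sample strain components are assumptions, not the paper's exact definition.

```python
# Illustrative blending of strain fields: a weighted average of the
# compatible (FEM) strain and the smoothed (NS-FEM) strain. The parameter
# name `alpha` and the values below are assumptions for illustration.

def blended_strain(eps_fem, eps_nsfem, alpha):
    """Return (1 - alpha) * eps_fem + alpha * eps_nsfem, componentwise."""
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(eps_fem, eps_nsfem)]

eps_fem   = [1.0e-4, 2.0e-4, 0.5e-4]    # compatible strain components
eps_nsfem = [1.2e-4, 1.8e-4, 0.7e-4]    # node-based smoothed strain components
print(blended_strain(eps_fem, eps_nsfem, alpha=0.4))
```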

Relevance:

60.00%

Publisher:

Abstract:

Data in germplasm collections contain a mixture of data types; binary, multistate and quantitative. Given the multivariate nature of these data, the pattern analysis methods of classification and ordination have been identified as suitable techniques for statistically evaluating the available diversity. The proximity (or resemblance) measure, which is in part the basis of the complementary nature of classification and ordination techniques, is often specific to particular data types. The use of a combined resemblance matrix has an advantage over data type specific proximity measures. This measure accommodates the different data types without manipulating them to be of a specific type. Descriptors are partitioned into their data types and an appropriate proximity measure is used on each. The separate proximity matrices, after range standardisation, are added as a weighted average and the combined resemblance matrix is then used for classification and ordination. Germplasm evaluation data for 831 accessions of groundnut (Arachis hypogaea L.) from the Australian Tropical Field Crops Genetic Resource Centre, Biloela, Queensland were examined. Data for four binary, five ordered multistate and seven quantitative descriptors have been documented. The interpretative value of different weightings - equal and unequal weighting of data types to obtain a combined resemblance matrix - was investigated by using principal co-ordinate analysis (ordination) and hierarchical cluster analysis. Equal weighting of data types was found to be more valuable for these data as the results provided a greater insight into the patterns of variability available in the Australian groundnut germplasm collection. The complementary nature of pattern analysis techniques enables plant breeders to identify relevant accessions in relation to the descriptors which distinguish amongst them. This additional information may provide plant breeders with a more defined entry point into the germplasm collection for identifying sources of variability for their plant improvement program, thus improving the utilisation of germplasm resources.
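
A minimal sketch of forming a combined resemblance matrix: range-standardise each data-type-specific proximity matrix, then take their weighted average. The toy matrices and equal weights below are illustrative only.

```python
# Sketch: combine data-type-specific proximity matrices into a single
# resemblance matrix by range standardisation and a weighted average.
# The toy matrices and weights are assumptions for illustration.
import numpy as np

def range_standardise(m):
    """Scale a proximity matrix to the [0, 1] range."""
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def combined_resemblance(matrices, weights):
    """Weighted average of range-standardised proximity matrices."""
    total = sum(weights)
    return sum(w * range_standardise(m) for m, w in zip(matrices, weights)) / total

binary_prox = np.array([[0.0, 1.0], [1.0, 0.0]])   # toy binary-descriptor distances
quant_prox  = np.array([[0.0, 3.2], [3.2, 0.0]])   # toy quantitative distances
print(combined_resemblance([binary_prox, quant_prox], weights=[1.0, 1.0]))
```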

Relevance:

60.00%

Publisher:

Abstract:

Rating systems are used by many websites, which allow customers to rate available items according to their own experience. Subsequently, reputation models are used to aggregate the available ratings into reputation scores for items. A problem with current reputation models is that they focus on improving accuracy for sparse datasets without considering their performance on dense datasets. In this paper, we propose a novel reputation model that generates more accurate reputation scores for items on any dataset, whether dense or sparse. Our proposed model can be described as a weighted average method in which the weights are generated using the normal distribution. Experiments show promising results for the proposed model over state-of-the-art models on both sparse and dense datasets.

Relevance:

60.00%

Publisher:

Abstract:

Many websites offer customers the opportunity to rate items and then use these ratings to generate item reputations, which other users can later use for decision-making purposes. The aggregated value of the ratings for an item represents that item's reputation. The accuracy of the reputation scores is important because they are used to rank items. Most aggregation methods do not consider the frequency of distinct ratings, nor do they test how accurate their reputation scores are over datasets of different sparsity. In this work we propose a new aggregation method, which can be described as a weighted average in which the weights are generated using the normal distribution. Evaluation results show that the proposed method outperforms state-of-the-art methods over datasets of different sparsity.
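
A minimal sketch of the general idea, weighting the sorted ratings with values drawn from a normal density and taking their weighted average as the reputation score. The placement of the density (centred on the middle rank, with an assumed spread) is an illustrative choice, not the exact model proposed in the abstract above.

```python
# Illustrative aggregation only: weight sorted ratings by a normal density
# and use their weighted average as the item's reputation score.
import math

def normal_pdf(x, mu, sigma):
    """Value of the normal density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def reputation(ratings):
    """Weighted average of sorted ratings with normal-distribution weights."""
    r = sorted(ratings)
    n = len(r)
    mu, sigma = (n - 1) / 2.0, max(n / 4.0, 1.0)      # assumed placement and spread
    weights = [normal_pdf(i, mu, sigma) for i in range(n)]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, r)) / total

print(f"reputation = {reputation([5, 4, 4, 1, 5, 3, 2]):.2f}")
```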

Relevance:

60.00%

Publisher:

Abstract:

Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which places limitations on their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia by using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error=1.4 ppb), while the best monthly model explained 76% (absolute RMS error=1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models were capable of capturing within-urban variability in NO2, and offer the ability to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006–2011. We are making our model predictions freely available for research.
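
A national population-weighted average concentration is the mesh-block concentrations averaged with population counts as weights. A small sketch with invented block values (not the study's data):

```python
# Sketch of a population-weighted average NO2 concentration over mesh blocks.
# Concentrations (ppb) and populations are invented for illustration.

blocks = [(6.1, 420), (8.4, 950), (4.7, 130)]   # (NO2 ppb, population) per mesh block
total_pop = sum(p for _, p in blocks)
pw_avg = sum(c * p for c, p in blocks) / total_pop
print(f"population-weighted average NO2 = {pw_avg:.2f} ppb")
```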

Relevance:

60.00%

Publisher:

Abstract:

Occupational standards concerning allowable concentrations of chemical compounds in the ambient air of workplaces have been established in several countries worldwide. With the integration of the European Union (EU), there has been a need to establish harmonised Occupational Exposure Limits (OELs). The European Commission Directive 95/320/EC of 12 July 1995 gave the Scientific Committee for Occupational Exposure Limits (SCOEL) the task of proposing, based on scientific data and where appropriate, occupational limit values, which may include the 8-h time-weighted average (TWA), short-term limits/excursion limits (STEL) and Biological Limit Values (BLVs). In 2000, the European Union issued a list of 62 chemical substances with Occupational Exposure Limits. Of these, 25 substances received a "skin" notation, indicating that toxicologically significant amounts may be taken up via the skin. For such substances, monitoring of concentrations in ambient air may not be sufficient, and biological monitoring strategies appear to be of potential importance in the medical surveillance of exposed workers. Recent progress has been made with respect to the formulation of a strategy related to health-based BLVs.