995 results for throughput evaluation


Relevance:

60.00%

Publisher:

Abstract:

The popular Wi-Fi and WiMAX technologies used to realize WLANs and WMANs, respectively, are quite different, but they can complement each other to provide competitive wireless access for voice traffic. The article develops the idea of WLAN/WMAN (Wi-Fi/WiMAX) integration: WiMAX offers a backup for traffic overflowing from Wi-Fi cells located within the WiMAX cell. The overflow process is improved by a proposed rearrangement control algorithm applied to the Wi-Fi voice calls. Analytical models are also proposed for evaluating system throughput and for verifying the effectiveness of using the WMAN as a backup for WLAN overflow traffic, as well as of the proposed call rearrangement algorithm.
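
A rough feel for the overflow idea can be obtained from a standard Erlang-B loss calculation; this is only a generic sketch, not the article's analytical model, and the cell capacities and offered loads below are hypothetical.

```python
# Illustrative Erlang-B sketch (not the article's model): voice calls blocked at
# a Wi-Fi cell overflow to the WiMAX cell covering it.
def erlang_b(offered_load, channels):
    """Blocking probability of an M/M/c/c loss system (iterative Erlang-B)."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Hypothetical parameters: 3 Wi-Fi cells, each with 8 voice-call slots and
# 6 Erlangs of offered load; the WiMAX cell reserves 10 slots for overflow.
wifi_load, wifi_slots, n_cells = 6.0, 8, 3
wimax_slots = 10

p_block_wifi = erlang_b(wifi_load, wifi_slots)
overflow_load = n_cells * wifi_load * p_block_wifi    # traffic pushed to WiMAX
p_block_wimax = erlang_b(overflow_load, wimax_slots)  # overflow treated as Poisson (an approximation)

carried = n_cells * wifi_load * (1 - p_block_wifi) + overflow_load * (1 - p_block_wimax)
print(f"Wi-Fi blocking: {p_block_wifi:.3f}, overflow load: {overflow_load:.2f} Erlang")
print(f"Carried voice traffic with WiMAX backup: {carried:.2f} Erlang")
```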

Relevance:

40.00%

Publisher:

Abstract:

Vehicular ad hoc network (VANET) applications are principally categorized into safety and commercial applications. Efficient traffic management for routing an emergency vehicle is of paramount importance in safety applications of VANETs. In the first case, a typical dense urban scenario is considered to demonstrate the role of the penetration ratio in achieving reduced travel time between the source and destination points. The major requirement for testing these VANET applications is a realistic simulation approach that would justify the results prior to actual deployment. A traffic simulator coupled with a network simulator through a feedback loop is apt for realistic simulation of VANETs. Thus, in this paper, we develop the safety application using the traffic control interface (TraCI), which couples SUMO (a traffic simulator) and NS2 (a network simulator). Likewise, the mean throughput is one of the necessary performance measures for commercial applications of VANETs. In the second case, commercial applications are considered wherein data is transferred between vehicles (V2V) and from roadside infrastructure to vehicles (I2V), and the throughput is assessed.
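
For context, the fragment below is a generic SUMO-side TraCI control loop of the kind such a safety application would run; the NS2 side of the coupling and the paper's actual preemption logic are not reproduced, and the scenario file, vehicle ID, traffic-light ID and signal state are placeholders.

```python
# Generic SUMO/TraCI control-loop sketch (the NS2 side of the coupling is omitted).
# "scenario.sumocfg", "emergency_0" and "junction_0" are hypothetical names.
import traci

traci.start(["sumo", "-c", "scenario.sumocfg"])
try:
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()                       # advance SUMO by one step
        if "emergency_0" in traci.vehicle.getIDList():
            speed = traci.vehicle.getSpeed("emergency_0")
            road = traci.vehicle.getRoadID("emergency_0")
            # A safety application could preempt the signal ahead of the emergency
            # vehicle; here one phase is simply forced as an example (the state
            # string length must match the junction's signal links).
            traci.trafficlight.setRedYellowGreenState("junction_0", "GGGGrrrr")
            print(f"emergency vehicle on {road} at {speed:.1f} m/s")
finally:
    traci.close()
```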

Relevance:

30.00%

Publisher:

Abstract:

Popular wireless networks, such as IEEE 802.11/15/16, are not designed for real-time applications. Thus, supporting real-time quality of service (QoS) in wireless real-time control is challenging. This paper adopts the widely used IEEE 802.11, with a focus on its distributed coordination function (DCF), for soft real-time control systems. The concept of the critical real-time traffic condition is introduced to characterize the marginal satisfaction of real-time requirements. Then, mathematical models are developed to describe the dynamics of DCF-based real-time control networks with periodic traffic, a unique feature of control systems. Performance indices such as throughput and packet delay are evaluated using the developed models, particularly under the critical real-time traffic condition. Finally, the proposed modelling is applied to traffic rate control for cross-layer networked control system design.
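
The flavour of such DCF modelling can be conveyed with the classical Bianchi saturation fixed point; this is a simplification under saturated traffic, not the paper's periodic-traffic model, and the contention-window parameters below are generic 802.11 defaults.

```python
# Bianchi-style saturation fixed point for 802.11 DCF (a simplification; the
# paper's periodic, non-saturated traffic model is not reproduced here).
def attempt_probability(n, w_min=32, m=5, iters=500):
    """Solve for the per-slot attempt probability tau with n contending nodes."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # conditional collision probability
        tau_new = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (w_min + 1) + p * w_min * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * tau_new           # damped iteration for stability
    return tau, 1.0 - (1.0 - tau) ** (n - 1)

for n in (5, 10, 20):
    tau, p = attempt_probability(n)
    p_tr = 1.0 - (1.0 - tau) ** n                     # probability the slot is busy
    p_succ = n * tau * (1.0 - tau) ** (n - 1) / p_tr  # success probability given busy
    print(f"n={n:2d}  tau={tau:.4f}  collision p={p:.3f}  P(success|busy)={p_succ:.3f}")
```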

Relevance:

30.00%

Publisher:

Abstract:

As computational models in fields such as medicine and engineering become more refined, resource requirements increase. Initially, these needs were satisfied using parallel computing on HPC clusters. However, such systems are often costly and lack flexibility, so HPC users are tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems differ, and performance may vary. The purpose of this study is to evaluate cloud services as a means of minimising both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while virtual CPU (vCPU) performance is satisfactory, network throughput can become a limiting factor.
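
A toy cost/runtime model (not the study's benchmark) is enough to show why network throughput can dominate: compute time shrinks as vCPUs are added while communication time grows. All numbers below are hypothetical.

```python
# Toy cost/runtime trade-off for choosing an instance count (hypothetical numbers,
# not the study's measurements): compute scales with total vCPUs, communication
# is limited by per-instance network throughput.
def runtime_hours(n_instances, compute_core_hours=400.0, vcpus_per_instance=8,
                  comm_gb=500.0, net_gbit_s=1.0):
    compute = compute_core_hours / (n_instances * vcpus_per_instance)
    comm = (comm_gb * 8.0) / (net_gbit_s * 3600.0) * n_instances  # grows with instance count
    return compute + comm

price_per_instance_hour = 0.40  # hypothetical on-demand price
for n in (1, 2, 4, 8, 16):
    t = runtime_hours(n)
    cost = n * price_per_instance_hour * t
    print(f"{n:2d} instances: {t:6.2f} h, ${cost:6.2f}")
```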

Relevance:

30.00%

Publisher:

Abstract:

High-throughput techniques are necessary to efficiently screen potential lignocellulosic feedstocks for the production of renewable fuels, chemicals, and bio-based materials, thereby reducing experimental time and expense while supplanting tedious, destructive methods. The ratio of lignin syringyl (S) to guaiacyl (G) monomers has been routinely quantified as a way to probe biomass recalcitrance. Mid-infrared and Raman spectroscopy have been demonstrated to produce robust partial least squares models for the prediction of lignin S/G ratios in a diverse group of Acacia and eucalypt trees. The most accurate Raman model has now been used to predict the S/G ratio of 269 unknown Acacia and eucalypt feedstocks. This study demonstrates the application of a partial least squares model built from Raman spectral data and lignin S/G ratios measured using pyrolysis/molecular beam mass spectrometry (pyMBMS) for the prediction of S/G ratios in an unknown data set. The predicted S/G ratios calculated by the model were averaged according to plant species, and the means were not found to differ from the pyMBMS ratios within the 95% confidence interval. Pairwise comparisons within each data set were employed to assess statistical differences between biomass species. While some pairwise comparisons failed to differentiate between species, the acacias in both data sets clearly display significant differences in S/G composition that distinguish them from the eucalypts. This research shows the power of Raman spectroscopy to supplant tedious, destructive methods for the evaluation of the lignin S/G ratio of diverse plant biomass materials.
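
A sketch of this kind of calibration, using scikit-learn's PLS regression on synthetic stand-ins for the Raman spectra and pyMBMS S/G ratios, is shown below; the sample counts, spectrum length and component number are assumptions.

```python
# Partial least squares calibration sketch (synthetic data stands in for the
# Raman spectra and pyMBMS S/G ratios; sizes and component count are assumptions).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 800
spectra = rng.normal(size=(n_samples, n_wavenumbers))           # stand-in Raman spectra
sg_ratio = 1.5 + 0.02 * spectra[:, 100] + rng.normal(scale=0.05, size=n_samples)

pls = PLSRegression(n_components=6)
predicted = cross_val_predict(pls, spectra, sg_ratio, cv=10).ravel()
rmse = float(np.sqrt(np.mean((predicted - sg_ratio) ** 2)))
print(f"cross-validated RMSE of predicted S/G ratio: {rmse:.3f}")

# The fitted model would then be applied to unknown feedstock spectra:
pls.fit(spectra, sg_ratio)
unknown = rng.normal(size=(5, n_wavenumbers))
print(pls.predict(unknown).ravel())
```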

Relevance:

30.00%

Publisher:

Abstract:

Clarification performance and flocculant dosage are strongly linked to the mud solids loading in the feed entering the clarifier. The recycling of filtrate can represent an extra ~10-15% mud solids loading on the clarifier, thereby reducing its effective capacity. Filtrate recycling may also cause a significant increase in turbidity, complexed calcium, phosphate, protein and polysaccharide levels in mixed juice, which impact evaporator scale formation and molasses exhaustion. The paper details the results obtained from laboratory, pilot-scale and factory trials of filtrate clarification using both sedimentation and flotation methods. Clarified filtrate of similar quality to ESJ could be produced. Filtrate clarification significantly reduced the insoluble solids, turbidity, phosphate and polysaccharide content, with slight reductions in the mineral content of the filtrate. On the basis of the improved filtrate quality, the clarified filtrate could be directed to ESJ instead of following the normal practice of directing mud filtrate to mixed juice. The potential impacts of implementing filtrate clarification are discussed with respect to improved performance and throughput of the clarification station.
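
The capacity effect of the recycled solids can be illustrated with a one-line mass balance using the ~10-15% figure quoted above; the base solids loading is hypothetical.

```python
# Back-of-envelope mass balance (hypothetical base loading): if recycled filtrate
# adds ~10-15% to the mud solids loading, the effective clarifier capacity at a
# fixed solids-handling limit drops by roughly the same fraction.
base_solids_t_per_h = 20.0                 # mud solids in clarifier feed without recycle
for extra_fraction in (0.10, 0.125, 0.15):
    with_recycle = base_solids_t_per_h * (1 + extra_fraction)
    effective_capacity = 1.0 / (1 + extra_fraction)
    print(f"+{extra_fraction:.1%} solids -> {with_recycle:.1f} t/h, "
          f"effective clarifier capacity {effective_capacity:.0%} of original")
```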

Relevance:

30.00%

Publisher:

Abstract:

We extend the modeling heuristic of (Harsha et al. 2006, IEEE IWQoS 06, pp. 178-187) to evaluate the performance of an IEEE 802.11e infrastructure network carrying packet telephone calls, streaming video sessions and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We identify the time boundaries of activities on the channel (called channel slot boundaries) and derive a Markov renewal process (MRP) of the contending nodes on these epochs. This is achieved by using the attempt probabilities of the contending nodes obtained from the saturation fixed-point analysis of (Ramaiyan et al. 2005, Proceedings of ACM Sigmetrics '05; journal version accepted for publication in IEEE TON). Regenerative analysis of this MRP yields the desired steady-state performance measures. We then use the MRP model to develop an effective bandwidth approach for bounding the size of the buffer required at the video queue of the AP, such that the streaming video packet loss probability is kept below 1%. The results match well with simulations using the network simulator ns-2. We find that, with the default IEEE 802.11e EDCA parameters for access categories AC 1, AC 2 and AC 3, the voice call capacity decreases if even one streaming video session and one TCP file download are initiated by some wireless station. Further, each voice call given up increases the video downlink stream throughput by 0.38 Mbps and the file download capacity by 0.14 Mbps (for the 11 Mbps PHY). We find that a buffer size of 75 KB is sufficient to ensure that the video packet loss probability at the QAP is within 1%.
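
The buffer-sizing step can be illustrated with a generic effective-bandwidth bound of the form P(queue > B) ≈ exp(-θB); the sketch below finds the largest decay rate θ whose effective bandwidth fits the video queue's service rate and sizes the buffer for a 1% loss target. The arrival and service figures are hypothetical, and this is not the paper's MRP-based derivation.

```python
# Generic effective-bandwidth buffer bound (not the paper's derivation): for
# i.i.d. per-slot video arrivals A served at rate c per slot, find the largest
# theta with (1/theta)*log E[exp(theta*A)] <= c, then B >= ln(1/eps)/theta.
import math
import numpy as np

rng = np.random.default_rng(1)
arrivals = rng.poisson(lam=6.0, size=200_000)   # hypothetical video packets per channel slot
service_per_slot = 8.0                          # packets the AP video queue can drain per slot
loss_target = 0.01

def effective_bandwidth(theta):
    return math.log(np.mean(np.exp(theta * arrivals))) / theta

# Bisect for the largest theta whose effective bandwidth stays below the service rate.
lo, hi = 1e-6, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if effective_bandwidth(mid) <= service_per_slot:
        lo = mid
    else:
        hi = mid

buffer_packets = math.log(1.0 / loss_target) / lo
print(f"theta ~ {lo:.3f}, buffer for <1% loss ~ {buffer_packets:.0f} packets")
```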

Relevance:

30.00%

Publisher:

Abstract:

In the context of the IEEE 802.11e standard for WLANs, we provide an analytical model for obtaining the maximum number of VoIP calls that can be supported on HCCA, such that the delay QoS constraint of the accepted calls is met, when TCP downloads are coexistent on EDCA. In this scenario, we derive the TCP download throughput by using an analytical model for the case where only TCP sessions are present in the WLAN. We show that the analytical model for combined voice and TCP transfers provides accurate results in comparison with simulations (using ns-2).
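
A back-of-envelope version of such an admission bound simply counts how many per-call HCCA TXOPs fit into one service interval once a share of the channel is left for EDCA; the timing values below are hypothetical and the sketch is not the paper's analytical model.

```python
# Illustrative HCCA admission bound (hypothetical timings, not the paper's model):
# VoIP calls are limited by how many per-call TXOPs fit into one service interval
# after leaving room for TCP traffic on EDCA.
service_interval_ms = 20.0        # one voice packet per call per direction every 20 ms
edca_share = 0.40                 # fraction of the service interval left for TCP on EDCA
txop_per_call_ms = 0.60           # downlink + uplink voice packets, ACKs and overheads

hcca_budget_ms = (1.0 - edca_share) * service_interval_ms
max_calls = int(hcca_budget_ms // txop_per_call_ms)
print(f"HCCA budget per SI: {hcca_budget_ms:.1f} ms -> up to {max_calls} VoIP calls")
```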

Relevance:

30.00%

Publisher:

Abstract:

We consider several WLAN stations associated at rates r_1, r_2, ..., r_k with an Access Point. Each station (STA) is downloading a long file, using TCP, from a local server located on the LAN to which the Access Point (AP) is attached. We assume that a TCP ACK is produced after the reception of d packets at an STA. We model these simultaneous TCP-controlled transfers using a semi-Markov process. Our analytical approach leads to a procedure for numerically computing the aggregate download throughput as well as the per-STA throughputs, and the results match simulations very well. (C) 2012 Elsevier B.V. All rights reserved.
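
A crude time-sharing approximation (not the semi-Markov analysis of the paper) already suggests why per-STA throughputs tend to equalize in this setting: the sketch below charges each STA one data packet per cycle at its own rate, plus a delayed TCP ACK every d packets, with all frame parameters assumed.

```python
# Crude time-sharing approximation (not the paper's semi-Markov model): the AP
# sends data packets to the STAs in turn; every d data packets an STA returns a
# TCP ACK at its own rate. "overhead_us" lumps PHY headers, IFS and MAC ACKs.
rates_mbps = [54.0, 36.0, 11.0, 5.5]     # hypothetical association rates r_1..r_k
pkt_bits, ack_bits = 12000, 320          # 1500-byte data packets, 40-byte TCP ACKs
overhead_us, d = 100.0, 2                # fixed per-frame overhead; delayed-ACK factor

cycle_us = 0.0
for r in rates_mbps:                     # r in Mbit/s == bits per microsecond
    data_time = pkt_bits / r + overhead_us
    ack_time = (ack_bits / r + overhead_us) / d   # one TCP ACK per d data packets
    cycle_us += data_time + ack_time

per_sta_mbps = pkt_bits / cycle_us                # each STA gets one data packet per cycle
print(f"per-STA throughput ~ {per_sta_mbps:.2f} Mbit/s, "
      f"aggregate ~ {per_sta_mbps * len(rates_mbps):.2f} Mbit/s")
```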

Relevance:

30.00%

Publisher:

Abstract:

Orthogonal frequency-division multiple access (OFDMA) systems divide the available bandwidth into orthogonal subchannels and exploit multiuser diversity and frequency selectivity to achieve high spectral efficiencies. However, they require a significant amount of channel state feedback for scheduling and rate adaptation and are sensitive to feedback delays. We develop a comprehensive analysis of OFDMA system throughput in the presence of feedback delays as a function of the feedback scheme, frequency-domain scheduler, and rate adaptation rule. We also derive expressions for the outage probability, which captures the inability of a subchannel to successfully carry data due to the feedback scheme or feedback delays. Our model encompasses the popular best-n and threshold-based feedback schemes and the greedy, proportional fair, and round-robin schedulers, which cover a wide range of throughput versus fairness trade-offs. It helps quantify the differing robustness of the schedulers to feedback overhead and delays. It shows that, even at low vehicular speeds, small feedback delays markedly degrade the throughput and increase the outage probability. Further, for a given feedback delay, the throughput degradation depends primarily on the feedback overhead and not on the feedback scheme itself. We also show how to optimize the rate adaptation thresholds as a function of the feedback delay.
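
As an illustration of how outdated feedback erodes throughput, the Monte Carlo sketch below implements a simplified best-n feedback scheme with a greedy per-subchannel scheduler and a Jakes correlation model for the delay; the user count, Doppler, link-adaptation margin and SNR are assumptions, and the paper's analytical expressions are not reproduced.

```python
# Monte Carlo sketch of best-n feedback with outdated CSI (a simplified setting;
# scheduler, margin and all parameters are assumptions).
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(2)
K, N, best_n = 8, 32, 4                  # users, subchannels, feedback entries per user
snr_avg, margin_db = 10.0, 3.0           # mean SNR and a link-adaptation backoff
fd, delay = 10.0, 5e-3                   # Doppler (Hz) and feedback delay (s)
rho = j0(2 * np.pi * fd * delay)         # Jakes time correlation of the channel
margin = 10 ** (margin_db / 10)

tput, outages, scheduled = 0.0, 0, 0
for _ in range(2000):
    h_old = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
    w = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
    h_now = rho * h_old + np.sqrt(1 - rho ** 2) * w
    snr_rep, snr_act = snr_avg * np.abs(h_old) ** 2, snr_avg * np.abs(h_now) ** 2

    # best-n feedback: only each user's n strongest reported subchannels are known
    mask = snr_rep >= np.sort(snr_rep, axis=1)[:, [-best_n]]
    reported = np.where(mask, snr_rep, 0.0)

    for sc in range(N):                  # greedy scheduler per subchannel
        if reported[:, sc].max() <= 0:
            continue                     # no feedback for this subchannel -> left idle
        u = int(np.argmax(reported[:, sc]))
        target = snr_rep[u, sc] / margin
        scheduled += 1
        if snr_act[u, sc] >= target:
            tput += np.log2(1 + target)
        else:
            outages += 1                 # actual SNR fell below the adapted rate

print(f"rho={rho:.3f}  avg rate/subchannel={tput / (2000 * N):.2f} b/s/Hz  "
      f"outage={outages / scheduled:.2%}")
```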

Relevance:

30.00%

Publisher:

Abstract:

Orthogonal frequency division multiple access (OFDMA) systems exploit multiuser diversity and frequency-selectivity to achieve high spectral efficiencies. However, they require considerable feedback for scheduling and rate adaptation, and are sensitive to feedback delays. We develop a comprehensive analysis of the OFDMA system throughput as a function of the feedback scheme, frequency-domain scheduler, and discrete rate adaptation rule in the presence of feedback delays. We analyze the popular best-n and threshold-based feedback schemes. We show that for both the greedy and round-robin schedulers, the throughput degradation, given a feedback delay, depends primarily on the fraction of feedback reduced by the feedback scheme and not the feedback scheme itself. Even small feedback delays at low vehicular speeds are shown to significantly degrade the throughput. We also show that optimizing the link adaptation thresholds as a function of the feedback delay can effectively counteract the detrimental effect of delays.

Relevance:

30.00%

Publisher:

Abstract:

The potential of the 18S rRNA V9 metabarcoding approach for diet assessment was explored using MiSeq paired-end (PE; 2 × 150 bp) technology. To critically evaluate the method's performance with degraded/digested DNA, the diets of two zooplanktivorous fish species from the Bay of Biscay, the European sardine (Sardina pilchardus) and the European sprat (Sprattus sprattus), were analysed. The taxonomic resolution and quantitative potential of 18S V9 metabarcoding were first assessed both in silico and with mock and field plankton samples. Our method was capable of reliably discriminating species within the reference database, provided there was at least one variable position in the 18S V9 region. Furthermore, it successfully discriminated the diets of the two fish species, including habitat and diel differences among sardines, overcoming some of the limitations of traditional visual-based diet analysis methods. The high sensitivity and semi-quantitative nature of the 18S V9 metabarcoding approach were supported by both visual microscopy and qPCR-based results. This molecular approach provides an alternative cost- and time-effective tool for food-web analysis.
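
The discrimination criterion mentioned (at least one variable position in the V9 region) can be checked mechanically against a reference set; the sketch below does this for made-up stand-in sequences, not real reference entries.

```python
# Tiny sketch of the discrimination criterion described: two reference 18S V9
# sequences can be told apart only if they differ at >= 1 aligned position.
# The sequences below are made-up stand-ins, not real reference entries.
references = {
    "prey_species_A": "GTCGCTACTACCGATTGAACGTTTTAGTGAGGTCCT",
    "prey_species_B": "GTCGCTACTACCGATTGAACGCTTTAGTGAGGTCCT",
    "prey_species_C": "GTCGCTACTACCGATTGAACGTTTTAGTGAGGTCCT",
}

def variable_positions(a, b):
    """Count mismatching positions between two aligned sequences of equal length."""
    return sum(x != y for x, y in zip(a, b))

names = list(references)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        diff = variable_positions(references[n1], references[n2])
        verdict = "distinguishable" if diff >= 1 else "NOT distinguishable"
        print(f"{n1} vs {n2}: {diff} variable position(s) -> {verdict}")
```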

Relevance:

30.00%

Publisher:

Abstract:

This paper experimentally demonstrates that, for two representative indoor distributed antenna system (DAS) scenarios, existing radio-over-fiber (RoF) DAS installations can enhance the capacity advantages of broadband 3 × 3 multiple-input-multiple-output (MIMO) radio services without requiring additional fibers or multiplexing schemes. This is true for both single- and multiple-user cases with a single base station and with multiple base stations. First, a theoretical example is used to illustrate that there is a negligible improvement in signal-to-noise ratio (SNR) when using a MIMO DAS with all N spatial streams replicated at N remote antenna units (RAUs), compared with a MIMO DAS with only one of the N streams replicated at each RAU, for N ≤ 4. It is then experimentally confirmed that a 3 × 3 MIMO DAS offers improved capacity and throughput compared with a 3 × 3 MIMO collocated antenna system (CAS) for the single-user case in two typical indoor DAS scenarios, i.e., one with significant line-of-sight (LOS) propagation and the other with entirely non-line-of-sight (NLOS) propagation. The improvement in capacity is 3.2% and 4.1%, respectively. Experimental channel measurements then confirm that there is a negligible capacity increase for the 3 × 3 configuration with three spatial streams per antenna unit over the 3 × 3 configuration with a single spatial stream per antenna unit: the former layout provides an increase of only ∼1% in the median channel capacity in both the single- and multiple-user scenarios. With 20 users and three base stations, a MIMO DAS using the latter layout offers median aggregate capacities of 259 and 233 bit/s/Hz for the LOS and NLOS scenarios, respectively. It is concluded that DAS installations can further enhance the capacity offered to multiple users by multiple 3 × 3 MIMO-enabled base stations. Further, designing future DAS systems to support broadband 3 × 3 MIMO may not require significant upgrades to existing installations for small numbers of spatial streams. © 2013 IEEE.
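
Capacity comparisons of this kind are typically made with the log-det MIMO capacity formula; the sketch below evaluates it for a 3 × 3 channel with equal per-antenna gains (a collocated array) and with unequal, arbitrary per-RAU gains standing in for a DAS. The gains and SNR are assumptions and do not reproduce the measured channels.

```python
# Rough MIMO capacity comparison using the log-det formula (illustrative only;
# per-antenna gains and SNR are assumptions, not the measured channels).
import numpy as np

rng = np.random.default_rng(3)
nt = nr = 3
snr = 10 ** (20 / 10)                       # 20 dB receive SNR (hypothetical)

def ergodic_capacity(per_antenna_gain, trials=5000):
    caps = []
    g = np.sqrt(np.asarray(per_antenna_gain))
    for _ in range(trials):
        h = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        h = h * g                           # scale each column by its transmit-antenna gain
        m = np.eye(nr) + (snr / nt) * h @ h.conj().T
        caps.append(np.log2(np.linalg.det(m).real))
    return float(np.mean(caps))

cas = ergodic_capacity([1.0, 1.0, 1.0])     # collocated antennas: equal gains
das = ergodic_capacity([2.0, 1.0, 0.6])     # DAS: arbitrary nearer/farther RAU gains
print(f"CAS ~ {cas:.2f} bit/s/Hz, DAS ~ {das:.2f} bit/s/Hz")
```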

Relevance:

30.00%

Publisher:

Abstract:

Version 1.1 of the Hypertext Transfer Protocol (HTTP) was principally developed as a means of reducing both document transfer latency and network traffic. The rationale for the performance enhancements in HTTP/1.1 is based on the assumption that the network is the bottleneck in Web transactions. In practice, however, the Web server can be the primary source of document transfer latency. In this paper, we characterize and compare the performance of HTTP/1.0 and HTTP/1.1 in terms of throughput at the server and transfer latency at the client. Our approach is based on considering a broader set of bottlenecks in an HTTP transfer; we examine how bottlenecks in the network, the CPU, and the disk system affect the relative performance of HTTP/1.0 versus HTTP/1.1. We show that the network demands under HTTP/1.1 are somewhat lower than under HTTP/1.0, and we quantify those differences in terms of packets transferred, server congestion window size and data bytes per packet. We show that when the CPU is the bottleneck, there is relatively little difference in performance between HTTP/1.0 and HTTP/1.1. Surprisingly, we show that when the disk system is the bottleneck, performance using HTTP/1.1 can be much worse than with HTTP/1.0. Based on these observations, we suggest a connection management policy for HTTP/1.1 that can improve throughput, decrease latency, and keep network traffic low when the disk system is the bottleneck.
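
The client-side difference between per-request and persistent connections can be observed directly with Python's standard http.client; the host and paths below are placeholders, and this measures only client-perceived latency, not the server-side bottlenecks the paper analyses.

```python
# Small client-side sketch contrasting non-persistent (HTTP/1.0-style) and
# persistent (HTTP/1.1 keep-alive) transfers; HOST and PATHS are placeholders.
import http.client
import time

HOST = "example.com"                        # placeholder server
PATHS = ["/"] * 10                          # ten document fetches

def fetch_new_connection_each_time():
    t0 = time.perf_counter()
    for path in PATHS:
        conn = http.client.HTTPConnection(HOST, timeout=10)   # new TCP connection per request
        conn.request("GET", path)
        conn.getresponse().read()
        conn.close()
    return time.perf_counter() - t0

def fetch_persistent_connection():
    t0 = time.perf_counter()
    conn = http.client.HTTPConnection(HOST, timeout=10)       # one connection, reused
    for path in PATHS:
        conn.request("GET", path)
        conn.getresponse().read()           # response must be drained before reuse
    conn.close()
    return time.perf_counter() - t0

print(f"per-request connections: {fetch_new_connection_each_time():.2f} s")
print(f"persistent connection:   {fetch_persistent_connection():.2f} s")
```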