Abstract:
There have been several studies on the performance of TCP-controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP-controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis combines two models: the first is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, in which the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterative calculations between these models yield the head-of-the-line probabilities, from which performance measures such as throughputs and packet failure probabilities can be derived. We find that, due to MAC-layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings we are also able to model tail-drop loss at the AP. Although it involves many approximations, the model captures the system behavior quite accurately when compared with simulations.
Abstract:
In this paper, we consider a distributed function computation setting where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of a set {X_i : i = 1, ..., m} of m random variables whose joint probability distribution function is given.
A dynamic bandwidth allocation scheme for interactive multimedia applications over cellular networks
Abstract:
Cellular networks have played a key role in providing high bandwidth to users by employing traditional methods such as guaranteed QoS based on application category at the radio-access-stratum level for various QoS classes. Moreover, newer multimode phones (e.g., phones that support LTE (Long Term Evolution), UMTS, GSM and WiFi all at once) can use multiple access methods simultaneously and can perform seamless handover among the supported technologies to remain connected. With various types of applications (including interactive ones) running on these devices, each with different QoS requirements, this work discusses how QoS (measured in terms of user-level response time, delay, jitter and transmission rate) can be achieved for interactive applications using dynamic bandwidth allocation schemes over cellular networks. We propose a dynamic bandwidth allocation scheme for interactive multimedia applications, with and without background load, in cellular networks. The system has been simulated for many application types running in parallel, and it has been observed that if interactive applications are to be provided with a decent response time, the policy at admission control must be periodically overhauled, taking into account the history and criticality of applications. The results demonstrate that interactive applications can be provided with good service if the policy database at admission control is reviewed dynamically.
Abstract:
In this paper we present a hardware-software hybrid technique for modular multiplication over large binary fields. The technique involves application of Karatsuba-Ofman algorithm for polynomial multiplication and a novel technique for reduction. The proposed reduction technique is based on the popular repeated multiplication technique and Barrett reduction. We propose a new design of a parallel polynomial multiplier that serves as a hardware accelerator for large field multiplications. We show that the proposed reduction technique, accelerated using the modified polynomial multiplier, achieves significantly higher performance compared to a purely software technique and other hybrid techniques. We also show that the hybrid accelerated approach to modular field multiplication is significantly faster than the Montgomery algorithm based integrated multiplication approach.
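The Karatsuba-Ofman step for binary-field polynomial multiplication mentioned above can be sketched in software. The following is our minimal illustration for GF(2)[x] polynomials encoded as Python integers (bit i holds the coefficient of x^i); the names and recursion threshold are ours, and this is not the paper's hardware accelerator or reduction technique.

```python
# Sketch of Karatsuba-Ofman multiplication in GF(2)[x], where addition
# is XOR (no carries). A polynomial is encoded as an int: bit i is the
# coefficient of x^i.

def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less multiplication, used as the base case."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a: int, b: int, threshold: int = 32) -> int:
    """Split a = a_hi*x^k + a_lo, b likewise; three half-size products:
    a*b = hi*x^(2k) ^ mid*x^k ^ lo, with
    mid = (a_hi^a_lo)(b_hi^b_lo) ^ hi ^ lo."""
    n = max(a.bit_length(), b.bit_length())
    if n <= threshold:
        return clmul(a, b)
    k = n // 2
    a_hi, a_lo = a >> k, a & ((1 << k) - 1)
    b_hi, b_lo = b >> k, b & ((1 << k) - 1)
    hi = karatsuba_gf2(a_hi, b_hi)
    lo = karatsuba_gf2(a_lo, b_lo)
    mid = karatsuba_gf2(a_hi ^ a_lo, b_hi ^ b_lo) ^ hi ^ lo
    return (hi << (2 * k)) ^ (mid << k) ^ lo
```

The recursion trades one of the four half-size products for a few XORs and shifts, which is the saving the paper's parallel multiplier exploits at much larger operand sizes.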
Abstract:
Certain parts of the State of Nagaland, situated in the northeastern region of India, have been experiencing rainfall deficits over the past few years, leading to severe drought-like conditions, which are likely to be aggravated under a climate change scenario. The state has already incurred considerable losses in the agricultural sector. Regional vulnerability assessments need to be carried out in order to help policy makers and planners formulate and implement effective drought management strategies. The present study uses an 'index-based approach' to quantify the climate variability-induced vulnerability of farmers in five villages of Dimapur district, Nagaland. Indicators reflective of the exposure, sensitivity and adaptive capacity of the farmers to drought were quantified on the basis of primary data generated through household surveys and participatory rural appraisal, supplemented by secondary data, in order to calculate a composite vulnerability index. The composite vulnerability index was found to be lowest for the village of New Showba and highest for Zutovi. The overall results reveal that biophysical characteristics contribute the most to overall vulnerability. Some potential adaptation strategies were also identified based on observations and discussions with the villagers.
Abstract:
Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance and head pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions vary. The adaptation framework incorporates the reliability of the different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state-of-the-art by 9.5% under relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic, multi-view head-pose dataset with ground truth, publicly available with this paper.
Abstract:
Climate change has great significance globally in general and for South Asia in particular. Here we use data from a network of 35 aerosol observatories over the Indian region to generate the first regional synthesis based on primary data and to estimate aerosol trends. On average, aerosol optical depth (AOD) was found to be increasing at a rate of 2.3% (of its value in 1985) per year, and more rapidly (~4%) during the last decade. If these trends continue, AOD at several locations would nearly double and approach unity in the next few decades, enhancing aerosol-induced lower-atmospheric warming by a factor of two. However, a regionally averaged scenario can be ascertained only in the coming years, when longer and denser data become available. The regional and global climate implications of such trends in the forcing elements need to be better assessed using GCMs.
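As a back-of-envelope check of the doubling claim above (our arithmetic, not the paper's analysis): a linear trend of r percent of the 1985 value per year doubles the 1985 value after 100/r years.

```python
# Hypothetical illustration: years needed for AOD to double its 1985
# value under a linear trend of `rate` percent of that value per year.

def years_to_double_linear(rate_percent_per_year: float) -> float:
    """A linear rise of r% of the baseline per year reaches 2x after 100/r years."""
    return 100.0 / rate_percent_per_year

print(years_to_double_linear(2.3))  # long-term rate: ~43 years
print(years_to_double_linear(4.0))  # recent-decade rate: 25 years
```

Both figures fall within "the next few decades", consistent with the abstract's projection.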
Abstract:
Using continuous, near-real-time measurements of the mass concentration of black carbon (BC) aerosols near the surface for a period of 1 year (January to December 2006) from a network of eight observatories spread over different environments of India, a space-time synthesis is generated. The strong seasonal variations observed, with a winter high and a summer low, are attributed to the combined effects of changes in synoptic air mass types, modulated strongly by atmospheric boundary layer dynamics. The spatial distribution shows much higher BC concentrations over the Indo-Gangetic Plain (IGP) than at the peninsular Indian stations. These were examined against simulations using two chemical transport models, GOCART (Goddard Global Ozone Chemistry Aerosol Radiation and Transport) and CHIMERE, for the first time over the Indian region. Both model simulations deviated significantly from the measurements at all the stations, more so during the winter and pre-monsoon seasons and over megacities, although the CHIMERE simulations show better agreement with the measurements. Notwithstanding this, both models captured the temporal variations, at seasonal and subseasonal timescales, and the natural variabilities (intra-seasonal oscillations) fairly well, especially at the off-equatorial stations. It is hypothesized that an improved atmospheric boundary layer (ABL) parameterization scheme for tropical environments might lead to better results with GOCART.
Abstract:
Orthogonal frequency-division multiple access (OFDMA) systems divide the available bandwidth into orthogonal subchannels and exploit multiuser diversity and frequency selectivity to achieve high spectral efficiencies. However, they require a significant amount of channel state feedback for scheduling and rate adaptation and are sensitive to feedback delays. We develop a comprehensive analysis of OFDMA system throughput in the presence of feedback delays as a function of the feedback scheme, frequency-domain scheduler, and rate adaptation rule. We also derive expressions for the outage probability, which captures the inability of a subchannel to successfully carry data due to the feedback scheme or feedback delays. Our model encompasses the popular best-n and threshold-based feedback schemes, and the greedy, proportional fair, and round-robin schedulers, which cover a wide range of throughput-versus-fairness tradeoffs. It helps quantify the differing robustness of the schedulers to feedback overhead and delays. The model shows that, even at low vehicular speeds, small feedback delays markedly degrade the throughput and increase the outage probability. Further, for a given feedback delay, the throughput degradation depends primarily on the feedback overhead and not on the feedback scheme itself. We also show how to optimize the rate adaptation thresholds as a function of the feedback delay.
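A best-n feedback scheme paired with a greedy scheduler, as analyzed above, can be sketched as a toy simulation. This is an illustration under assumed exponentially distributed SNRs; the function names, parameters and channel model are ours, not the paper's analytical model.

```python
# Hypothetical sketch: best-n feedback + greedy scheduling in OFDMA.
# Each user feeds back only its n strongest subchannels; the scheduler
# assigns each subchannel to the best reporting user. Unreported
# subchannels stay unassigned, one source of the outage events the
# abstract analyzes.

import random

def best_n_feedback(snr, n):
    """Per-user feedback: keep only the n largest-SNR subchannel indices."""
    top = sorted(range(len(snr)), key=lambda k: snr[k], reverse=True)[:n]
    return {k: snr[k] for k in top}

def greedy_schedule(reports):
    """Per-subchannel: pick the user with the highest reported SNR."""
    assignment = {}  # subchannel -> (user, reported SNR)
    for user, rep in reports.items():
        for k, s in rep.items():
            if k not in assignment or s > assignment[k][1]:
                assignment[k] = (user, s)
    return assignment

random.seed(0)
num_users, num_subch, n = 4, 8, 3
snrs = {u: [random.expovariate(1.0) for _ in range(num_subch)]
        for u in range(num_users)}
reports = {u: best_n_feedback(snrs[u], n) for u in snrs}
print(greedy_schedule(reports))
```

With n much smaller than the number of subchannels, the feedback overhead drops at the cost of more unassigned (outage) subchannels, which is the tradeoff the analysis quantifies.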
Abstract:
The recently discovered scalar resonance at the Large Hadron Collider is now almost confirmed to be a Higgs boson, whose CP properties are yet to be established. At the International Linear Collider, with and without polarized beams, it may be possible to probe these properties at high precision. In this work, we study the possibility of probing departures from the pure CP-even case by using the decay distributions in the process e+e- -> t tbar Phi, with Phi mainly decaying into a b bbar pair. We have compared the case of a minimal extension of the Standard Model (model I), with an additional pseudoscalar degree of freedom, with a more realistic case, namely the CP-violating two-Higgs-doublet model (model II), which permits a more general description of the couplings. We have considered the International Linear Collider with sqrt(s) = 800 GeV and an integrated luminosity of 300 fb^-1. Our main findings are that, even for small departures from the CP-even case, the decay distributions are sensitive to the presence of a CP-odd component in model II, while it is difficult to probe these departures in model I unless the pseudoscalar component is very large. Noting that the proposed degrees of beam polarization increase the statistics, the process demonstrates the effective role of beam polarization in studies beyond the Standard Model. Further, our study shows that an indefinite-CP Higgs would be a sensitive laboratory for physics beyond the Standard Model.
Abstract:
Sequential adsorption of CO and NO, as well as the equimolar NO + CO reaction with variation of temperature, over Pd2+ ion-substituted CeO2 and Ce0.75Sn0.25O2 supports has been studied by the DRIFTS technique. The results are compared with 2 at.% Pd/Al2O3 containing Pd-0. Both linear and bridging Pd-0-CO bands are observed over 2 at.% Pd/Al2O3, but the band positions are shifted to higher frequencies in Ce0.98Pd0.02O2-delta and Ce0.73Sn0.25Pd0.02O2-delta catalysts, which could be associated with Pd(delta+)-CO species. In contrast, a Pd2+-CO band at 2160 cm(-1) is observed upon CO adsorption over Ce0.98Pd0.02O2-delta and Ce0.73Sn0.25Pd0.02O2-delta catalysts pre-adsorbed with NO, and a Pd+-CO band at 2120 cm(-1) slowly develops on Ce0.73Sn0.25Pd0.02O2-delta over time. An intense linear Pd-0-NO band at 1750 cm(-1), found upon NO exposure to CO pre-adsorbed 2 at.% Pd/Al2O3, indicates molecular adsorption of NO. On the other hand, a weak Pd2+-NO band at 1850 cm(-1) is noticed after NO exposure to the Ce0.98Pd0.02O2-delta catalyst pre-adsorbed with CO, indicating dissociative adsorption of NO, which is crucial for NO reduction. A Pd-0-NO band is initially formed over CO pre-adsorbed Ce0.73Sn0.25Pd0.02O2-delta, which is red-shifted over time along with the formation of a Pd2+-NO band. Several intense bands related to nitrates and nitrites are observed after exposure of NO to fresh as well as CO pre-adsorbed Ce0.98Pd0.02O2-delta and Ce0.73Sn0.25Pd0.02O2-delta catalysts. Ramping the temperature in the DRIFTS cell upon NO and CO adsorption shows the formation of N2O and NCO surface species, and the N2O-formation temperature is comparable with that for the reaction carried out in a reactor.
Abstract:
Daily rainfall datasets for 10 years (1998-2007) of the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) version 6 and the India Meteorological Department (IMD) gridded rain gauge data have been compared over the Indian landmass on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improves significantly (~0.9) when the study is confined to specific wet and dry spells, each of about 5-8 days. Wavelet analysis of intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contributions of the two major modes (30-50 days and 10-20 days) to range between ~30-40% and 5-10%, respectively, for the various years. Analysis of inter-annual variability shows the satellite data underestimating seasonal rainfall by ~110 mm during the southwest monsoon and overestimating it by ~150 mm during the northeast monsoon season. At high spatio-temporal scales, viz., the 1° × 1° grid, TMPA data do not correspond to ground truth. We propose a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other, by studying the contribution to total seasonal rainfall from different rainfall-rate windows (at 1 mm intervals) on different spatial scales (at the daily time scale). The compatibility spatial scale is seen to be beyond a 5° × 5° average spatial scale over the Indian landmass. This will help decide the usability of TMPA products, averaged at appropriate spatial scales, for specific process studies, e.g., at cloud, meso or synoptic scales.
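The rate-window decomposition described above can be sketched as follows. The daily totals and the binning helper are hypothetical illustrations of the procedure, not the paper's code or data.

```python
# Hypothetical sketch of the rate-window analysis: bin daily rainfall
# into 1 mm/day rate windows and compute each window's fractional
# contribution to the seasonal total. Comparing these fractions between
# two datasets at increasing spatial averaging scales locates the scale
# at which they become compatible.

def rate_window_contributions(daily_mm, bin_mm=1.0):
    """Fraction of the seasonal total contributed by each rate window."""
    total = sum(daily_mm)
    contrib = {}
    for r in daily_mm:
        b = int(r // bin_mm)  # window index: [b*bin_mm, (b+1)*bin_mm)
        contrib[b] = contrib.get(b, 0.0) + r
    return {b: v / total for b, v in sorted(contrib.items())}

# Illustrative daily totals (mm) for one grid cell over part of a season
print(rate_window_contributions([0.5, 1.2, 3.7, 12.4, 0.9, 25.0]))
```

The fractions sum to one by construction, so two datasets can be compared window by window at each averaging scale.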
Abstract:
Boundary layers are subject to favorable and adverse pressure gradients because of both the temporal and spatial components of the pressure gradient, and an adverse pressure gradient may cause the flow to separate. In a closed-loop unsteady tunnel, we have studied the initiation of separation in unsteady flow past a constriction (bluff body) in a channel. We propose two important scalings for the time at which the boundary layer separates: one is based on the local pressure gradient, and the other is a convective time scale based on boundary layer parameters. Flow visualization using a dye injection technique shows the flow structure past the body. The nondimensional shedding frequency (Strouhal number) is calculated based on the boundary layer and momentum thicknesses. The Strouhal number based on momentum thickness shows close agreement with that for a flat plate and a circular cylinder.
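The momentum-thickness-based Strouhal number used above is the standard nondimensionalization St = f·theta/U, with f the shedding frequency, theta the momentum thickness, and U the free-stream velocity. A minimal sketch with entirely hypothetical values (not the paper's measurements):

```python
# Strouhal number St = f * L / U for a chosen length scale L
# (here the momentum thickness theta). Input values are illustrative.

def strouhal(freq_hz: float, length_m: float, velocity_m_s: float) -> float:
    """Nondimensional shedding frequency based on length scale `length_m`."""
    return freq_hz * length_m / velocity_m_s

# e.g. f = 50 Hz, theta = 1 mm, U = 0.25 m/s  ->  St = 0.2
print(strouhal(50.0, 1e-3, 0.25))
```

Choosing theta rather than a body dimension as the length scale is what allows the comparison with flat-plate and circular-cylinder results reported in the abstract.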