45 results for Networks on chip (NoC)
Abstract:
This paper investigates the random channel access mechanism specified in the IEEE 802.16 standard for uplink traffic in a Point-to-MultiPoint (PMP) network architecture. An analytical model is proposed to study the impact of the channel access parameters, bandwidth configuration and piggyback policy on performance. The impacts of the physical burst profile and non-saturated network traffic are also taken into account in the model. Simulations validate the proposed analytical model. It is observed that bandwidth utilization can be improved when the bandwidth for random channel access is properly configured according to the channel access parameters, piggyback policy and network traffic.
Abstract:
The objective of this paper is to combine antenna downtilt selection with cell size selection in order to reduce the overall radio frequency (RF) transmission power in a homogeneous High-Speed Downlink Packet Access (HSDPA) cellular radio access network (RAN). The analysis is based on the concept of small-cell deployment. The energy consumption ratio (ECR) and the energy reduction gain (ERG) of the cellular RAN are calculated for different antenna tilts as the cell size is reduced for a given user density and service area. The results show that a suitable antenna tilt and RF power setting can achieve an overall energy reduction of up to 82.56%. Equally, our results demonstrate that a small-cell deployment can considerably reduce the overall energy consumption of a cellular network.
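The ECR and ERG metrics used above are commonly defined as the energy spent per delivered bit and the relative saving against a baseline configuration; a minimal sketch under those standard definitions (all power and throughput figures here are illustrative, not taken from the paper):

```python
def ecr(power_watts, throughput_bps):
    """Energy Consumption Ratio: energy spent per delivered bit (J/bit)."""
    return power_watts / throughput_bps

def erg(ecr_baseline, ecr_improved):
    """Energy Reduction Gain: relative ECR saving vs. the baseline, in %."""
    return 100.0 * (ecr_baseline - ecr_improved) / ecr_baseline

# Illustrative numbers only: a large-cell baseline vs. a small-cell
# deployment with a better antenna downtilt and lower RF power.
baseline = ecr(power_watts=10.0, throughput_bps=10e6)  # 1e-6 J/bit
improved = ecr(power_watts=2.0, throughput_bps=10e6)   # 2e-7 J/bit
print(f"ERG = {erg(baseline, improved):.1f}%")  # prints: ERG = 80.0%
```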
Abstract:
This paper addresses the effectiveness of physical-layer network coding (PNC) on the capacity improvement for multi-hop multicast in random wireless ad hoc networks (WAHNs). While it can be shown that PNC provides a capacity gain, we prove that the per-session throughput capacity with PNC is Θ(1/(nR(n))), where n is the total number of nodes, R(n) is the communication range, and each multicast session consists of a constant number of sinks. The result implies that PNC cannot improve the capacity order of multicast in random WAHNs, which differs from the intuition that PNC may improve the capacity order since it allows simultaneous signal reception and combination. Copyright © 2010 ACM.
Abstract:
Academia has followed companies' interest in establishing industrial networks by studying aspects such as social interaction and contractual relationships. But what patterns underlie the emergence of industrial networks, and what support should research provide for practitioners? Firstly, it seems that manufacturing is becoming a commodity rather than a unique capability, which applies especially to low-technology approaches in downstream parts of the network, for example in assembly operations. Secondly, the increased tendency to specialize forces other parts of industrial networks to introduce advanced manufacturing technologies for niche markets. Thirdly, the capital market for investments in capacity and the trade in manufacturing as a commodity dominate resource allocation to a larger extent. Fourthly, there will be a continuous move toward more loosely connected entities forming manufacturing networks. More traditional concepts, like keiretsu and chaebol networks, do not sufficiently support this transition. Research should address these fundamental challenges to prepare for the industrial networks of 2020 and beyond.
Abstract:
Despite recent research on time (e.g. Hedaa & Törnroos, 2001), consideration of the time dimension in data collection, analysis and interpretation in research in supply networks is, to date, still limited. Drawing on a body of literature from organization studies, and empirical findings from a six-year action research programme and a related study of network learning, we reflect on time, timing and timeliness in interorganizational networks. The empirical setting is supply networks in the English health sector wherein we identify and elaborate various issues of time, within the case and in terms of research process. Our analysis is wide-ranging and multi-level, from the global (e.g. identifying the notion of life cycles) to the particular (e.g. different cycle times in supply, such as daily for deliveries and yearly for contracts). We discuss the ‘speeding up’ of inter-organizational ‘e’ time and tensions with other time demands. In closing the paper, we relate our conclusions to the future conduct of the research programme and supply research more generally, and to the practice of managing supply (in) networks.
Abstract:
Energy consumption has been a key concern of data gathering in wireless sensor networks. Previous research shows that modulation scaling is an efficient technique for reducing energy consumption. However, this technique also impacts both packet delivery latency and packet loss, and may therefore adversely affect application quality. In this paper, we study the problem of energy optimization through modulation scaling. A mathematical model is proposed to analyze the impact of modulation scaling on the overall energy consumption, end-to-end mean delivery latency and mean packet loss rate. A centralized optimal management mechanism is developed based on the model, which adaptively adjusts the modulation levels to minimize energy consumption while ensuring the QoS for data gathering. Experimental results show that the management mechanism saves significant energy in all the investigated scenarios. Some valuable results are also observed in the experiments. © 2004 IEEE.
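The latency/energy trade-off behind modulation scaling can be sketched with a commonly used energy model (not necessarily the exact model of this paper): the RF energy per symbol grows roughly with 2^b − 1 for b bits per symbol, while each symbol carries b bits, so lowering b saves energy per bit but stretches per-bit transmission time. The constants below are hypothetical placeholders.

```python
def energy_per_bit(b, c_rf=1.0e-7, c_elec=2.0e-8):
    """Approximate radio energy per bit at modulation level b (bits/symbol).

    Assumed model: RF energy per symbol grows with (2**b - 1), plus a fixed
    per-symbol electronics cost; both are amortised over the b bits carried.
    c_rf and c_elec are illustrative constants, not measured values.
    """
    return (c_rf * (2 ** b - 1) + c_elec) / b

def time_per_bit(b, symbol_rate=1e6):
    """Each symbol carries b bits, so per-bit latency shrinks as b grows."""
    return 1.0 / (b * symbol_rate)

# Scaling the modulation level down saves energy but increases latency:
for b in (2, 4, 6):
    print(f"b={b}: {energy_per_bit(b):.2e} J/bit, {time_per_bit(b):.2e} s/bit")
```

This is the tension the paper's management mechanism navigates: the modulation level that minimizes energy is not the one that minimizes delivery latency, so the two must be balanced against the application's QoS targets.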
Abstract:
Editorial
Abstract:
Many innovations are inspired by past ideas in a nontrivial way. Tracing these origins and identifying scientific branches is crucial for research inspirations. In this paper, we use citation relations to identify the descendant chart, i.e., the family tree of research papers. Unlike other spanning trees that focus on cost or distance minimization, we make use of the nature of citations and identify the most important parent for each publication, leading to a treelike backbone of the citation network. Measures are introduced to validate the backbone as the descendant chart. We show that citation backbones can well characterize the hierarchical and fractal structure of scientific development, and lead to an accurate classification of fields and subfields. © 2011 American Physical Society.
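The backbone construction described above (keep one "most important" parent per paper) can be sketched as follows. The paper's actual importance measure is not given here, so as an illustration the sketch picks the reference with the highest citation count:

```python
from collections import defaultdict

def citation_backbone(citations):
    """Build a tree-like backbone of a citation network: each paper keeps
    exactly one 'most important' parent among the papers it cites.

    citations: dict mapping paper -> list of cited papers (its references).
    Importance here is illustrative: the most-cited reference wins; the
    paper's actual selection measure may differ.
    """
    in_degree = defaultdict(int)
    for refs in citations.values():
        for r in refs:
            in_degree[r] += 1

    backbone = {}
    for paper, refs in citations.items():
        if refs:  # papers with no references become roots
            backbone[paper] = max(refs, key=lambda r: in_degree[r])
    return backbone

# Toy citation network: C and D both cite A and B; A is cited more widely.
cites = {"A": [], "B": ["A"], "C": ["A", "B"], "D": ["A", "B"], "E": ["C"]}
print(citation_backbone(cites))  # {'B': 'A', 'C': 'A', 'D': 'A', 'E': 'C'}
```

Because every paper retains exactly one parent, the result is a forest rather than the full citation graph, which is what makes the hierarchical, tree-like structure of a field visible.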
Abstract:
This paper proposes the use of 2-D differential decoding to improve the robustness of dual-polarization optical packet receivers, demonstrated for the first time in a wavelength-switching scenario.
Abstract:
Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address the concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies, forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ (te/n) and by k(1-p)/(n-te) when p > (te/n), where te is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np-te)/(n-te) for (te/n) < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved and the residual loss rate is lower bounded by (p+r-1)/r, for (1-r) < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of the scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Results on comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in the role of delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study is provided to demonstrate how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
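The throughput and residual-loss bounds stated in the abstract can be evaluated directly. A minimal sketch (the (255, 223) code and loss rates below are illustrative choices, not figures from the paper):

```python
def throughput_bound(n, k, t_e, p):
    """Upper bound on normalised throughput of an (n, k) forward erasure
    code with erasure-control capability t_e, at packet loss rate p:
    r = k/n when p <= t_e/n, otherwise k*(1-p)/(n-t_e)."""
    if p <= t_e / n:
        return k / n
    return k * (1 - p) / (n - t_e)

def residual_loss_bound(n, t_e, p):
    """Lower bound on residual loss rate, (n*p - t_e)/(n - t_e),
    valid for t_e/n < p <= 1."""
    assert t_e / n < p <= 1, "bound only holds for t_e/n < p <= 1"
    return (n * p - t_e) / (n - t_e)

# Illustrative example: an (n=255, k=223) code correcting t_e = 32 erasures.
n, k, t_e = 255, 223, 32
print(throughput_bound(n, k, t_e, 0.05))   # p below t_e/n ~ 0.125, cap is r
print(throughput_bound(n, k, t_e, 0.20))   # p above t_e/n, cap drops
print(residual_loss_bound(n, t_e, 0.20))   # some loss is now unavoidable
```

Note how the cap switches regimes at p = t_e/n: below it the code fully absorbs the losses and throughput is limited only by the code rate r; above it, residual loss becomes unavoidable and throughput decays with p.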