242 results for TDD-LTE
Abstract:
Single-carrier frequency division multiple access (SC-FDMA) has become a popular alternative to orthogonal frequency division multiple access (OFDMA) for multiuser communication on the uplink, mainly because of the low peak-to-average power ratio (PAPR) of SC-FDMA compared to that of OFDMA. Long Term Evolution (LTE) uses SC-FDMA on the uplink to exploit this PAPR advantage and reduce transmit power amplifier backoff in user terminals. In this paper, we show that SC-FDMA can be beneficially used for multiuser communication on the downlink as well. We present SC-FDMA transmit and receive signaling architectures for multiuser communication on the downlink. The benefits of using SC-FDMA on the downlink are that it can achieve i) significantly better bit error rate (BER) performance at the user terminal compared to OFDMA, and ii) improved PAPR compared to OFDMA, which reduces base station (BS) power amplifier backoff (making BSs greener). The SC-FDMA receiver needs to perform joint equalization, which can be carried out using low-complexity techniques. To this end, we present a local neighborhood search based equalization algorithm for SC-FDMA that is attractive in both complexity and performance. We present simulation results that establish the PAPR and BER performance advantages of SC-FDMA over OFDMA in the multiuser SISO/MIMO downlink as well as in the large-scale multiuser MISO downlink with tens to hundreds of antennas at the BS.
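The PAPR gap this abstract builds on is easy to reproduce: DFT-precoding the modulation symbols before the IDFT (as SC-FDMA does with a full localized allocation) restores the low-PAPR single-carrier waveform, while plain OFDMA superposes all subcarriers. A minimal pure-Python sketch, not from the paper; the subcarrier count, QPSK alphabet, and full-band allocation are illustrative choices:

```python
import cmath
import math
import random

def dft(x):
    # direct DFT, fine for small N
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def papr_db(x):
    # peak-to-average power ratio of a complex baseband block, in dB
    p = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(p) / (sum(p) / len(p)))

random.seed(1)
N = 64
qpsk = [complex(random.choice([-1, 1]), random.choice([-1, 1])) / math.sqrt(2)
        for _ in range(N)]

# OFDMA: symbols mapped straight onto subcarriers, then IDFT
papr_ofdma = papr_db(idft(qpsk))
# SC-FDMA: DFT precoding first; with a full localized allocation the
# IDFT undoes it and the constant-envelope QPSK waveform is recovered
papr_scfdma = papr_db(idft(dft(qpsk)))

print("OFDMA PAPR (dB):", round(papr_ofdma, 2))
print("SC-FDMA PAPR (dB):", round(papr_scfdma, 2))
```

With a full-band QPSK allocation the SC-FDMA PAPR is essentially 0 dB, while the OFDMA waveform sits several dB higher, which is exactly the amplifier-backoff argument made above.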
Abstract:
Transmit antenna selection (AS) has been adopted in contemporary wideband wireless standards such as Long Term Evolution (LTE). We analyze a comprehensive new model for AS that captures several key features of its operation in wideband orthogonal frequency division multiple access (OFDMA) systems. These include the use of channel-aware frequency-domain scheduling (FDS) in conjunction with AS, the hardware constraint that a user must transmit using the same antenna over all its assigned subcarriers, and the scheduling constraint that the subcarriers assigned to a user must be contiguous. The model also captures the novel dual pilot training scheme used in LTE, in which a coarse, system bandwidth-wide sounding reference signal (SRS) is used to acquire relatively noisy channel state information (CSI) for AS and FDS, and a dense narrow-band demodulation reference signal is used to acquire accurate CSI for data demodulation. We analyze the symbol error probability when AS is done in conjunction with channel-unaware, but fair, round-robin scheduling and with channel-aware greedy FDS. Our results quantify how effective joint AS-FDS is in dispersive environments, the interactions between the above features, and the ability of the user to lower SRS power with minimal performance degradation.
Abstract:
This paper considers the problem of channel estimation at the transmitter in a spatial multiplexing-based Time Division Duplex (TDD) Multiple Input Multiple Output (MIMO) system with perfect channel state information at the receiver (CSIR). A novel channel-dependent Reverse Channel Training (RCT) sequence is proposed, from which the transmitter estimates the beamforming vectors for forward-link data transmission. This training sequence is designed based on two metrics: (i) a capacity lower bound, and (ii) the mean square error (MSE) in the estimate. The performance of the proposed training scheme is analyzed and shown to significantly outperform the conventional orthogonal RCT sequence. Also, for the case where the transmitter uses water-filling power allocation for data transmission, a novel RCT sequence is proposed and optimized with respect to the MSE in estimating the transmit covariance matrix.
Abstract:
In contemporary wideband orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE) and WiMAX, different subcarriers over which a codeword is transmitted may experience different signal-to-noise ratios (SNRs). Thus, adaptive modulation and coding (AMC) in these systems is driven by a vector of subcarrier SNRs experienced by the codeword, and is more involved. Exponential effective SNR mapping (EESM) simplifies the problem by mapping this vector into a single equivalent flat-fading SNR. Analysis of AMC using EESM is challenging owing to its non-linear nature and its dependence on the modulation and coding scheme. We first propose a novel statistical model for the EESM based on the Beta distribution, motivated by the central limit approximation for random variables with finite support. It is simpler than, and as accurate as, the more involved ad hoc models proposed earlier. Using it, we develop novel expressions for the throughput of a point-to-point OFDM link with multi-antenna diversity that uses EESM for AMC. We then analyze a general, multi-cell OFDM deployment with co-channel interference for various frequency-domain schedulers. Extensive results based on LTE and WiMAX are presented to verify the model and analysis, and to gain new insights.
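The EESM these two abstracts rely on has a standard closed form: the effective SNR is gamma_eff = -beta * ln((1/N) * sum_i exp(-gamma_i / beta)), where beta is a calibration constant that depends on the modulation and coding scheme. A small sketch of the mapping; the beta value used below is illustrative, not a calibrated one:

```python
import math

def eesm(snrs_db, beta):
    """Map per-subcarrier SNRs (in dB) to one effective flat-fading SNR (in dB)."""
    snrs = [10 ** (s / 10) for s in snrs_db]  # dB -> linear
    eff = -beta * math.log(
        sum(math.exp(-g / beta) for g in snrs) / len(snrs))
    return 10 * math.log10(eff)               # linear -> dB
```

Two properties worth noting: a flat SNR vector maps to itself, and SNR dispersion pulls the effective SNR toward the weakest subcarriers, e.g. `eesm([0.0, 20.0], 2.0)` is well below the 10 dB midpoint. This pessimism with respect to the arithmetic mean is what makes the mapping useful for conservative MCS selection.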
Abstract:
In contemporary orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE), LTE-Advanced, and WiMAX, a codeword is transmitted over a group of subcarriers. Since different subcarriers see different channel gains in frequency-selective channels, the modulation and coding scheme (MCS) of the codeword must be selected based on the vector of signal-to-noise-ratios (SNRs) of these subcarriers. Exponential effective SNR mapping (EESM) maps the vector of SNRs into an equivalent flat-fading SNR, and is widely used to simplify this problem. We develop a new analytical framework to characterize the throughput of EESM-based rate adaptation in such wideband channels in the presence of feedback delays. We derive a novel accurate approximation for the throughput as a function of feedback delay. We also propose a novel bivariate gamma distribution to model the time evolution of EESM between the times of estimation and data transmission, which facilitates the analysis. These are then generalized to a multi-cell, multi-user scenario with various frequency-domain schedulers. Unlike prior work, most of which is simulation-based, our framework encompasses both correlated and independent subcarriers and various multiple antenna diversity modes; it is accurate over a wide range of delays.
Abstract:
Contemporary cellular standards, such as Long Term Evolution (LTE) and LTE-Advanced, employ orthogonal frequency-division multiplexing (OFDM) and use frequency-domain scheduling and rate adaptation. In conjunction with feedback reduction schemes, high downlink spectral efficiencies are achieved while limiting the uplink feedback overhead. One such important scheme that has been adopted by these standards is best-m feedback, in which every user feeds back its m largest subchannel (SC) power gains and their corresponding indices. We analyze the single cell average throughput of an OFDM system with uniformly correlated SC gains that employs best-m feedback and discrete rate adaptation. Our model incorporates three schedulers that cover a wide range of the throughput versus fairness tradeoff and feedback delay. We show that, for small m, correlation significantly reduces average throughput with best-m feedback. This result is pertinent as even in typical dispersive channels, correlation is high. We observe that the schedulers exhibit varied sensitivities to correlation and feedback delay. The analysis also leads to insightful expressions for the average throughput in the asymptotic regime of a large number of users.
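As a concrete illustration of the best-m mechanism described above: each user reports only its m strongest subchannel gains together with their indices, and a greedy frequency-domain scheduler then assigns each subchannel to the best-reporting user. A toy sketch of the mechanism only, not the paper's throughput analysis; the gain values and m are made up for illustration:

```python
def best_m_report(gains, m):
    # indices and gains of the user's m strongest subchannels
    idx = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)[:m]
    return {i: gains[i] for i in idx}

def greedy_schedule(reports, n_sc):
    # per subchannel, pick the user with the largest reported gain;
    # subchannels nobody reported stay unassigned
    alloc = {}
    for sc in range(n_sc):
        cands = [(rep[sc], u) for u, rep in reports.items() if sc in rep]
        if cands:
            alloc[sc] = max(cands)[1]
    return alloc

# toy example: two users, three subchannels, m = 2
gains = {0: [3.0, 1.0, 2.0], 1: [1.0, 4.0, 0.5]}
reports = {u: best_m_report(g, 2) for u, g in gains.items()}
alloc = greedy_schedule(reports, 3)
```

The feedback saving is the point: each user sends m (gain, index) pairs instead of the full gain vector, at the cost of some subchannels being scheduled on noisy or missing information, which is where the correlation and delay effects analyzed above come in.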
Abstract:
Practical orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE), exploit multi-user diversity using very limited feedback. The best-m feedback scheme is one such limited feedback scheme, in which users report only the gains of their m best subchannels (SCs) and their indices. While the scheme has been extensively studied and adopted in standards such as LTE, an analysis of its throughput for the practically important case in which the SCs are correlated has received less attention. We derive new closed-form expressions for the throughput when the SC gains of a user are uniformly correlated. We analyze the performance of the greedy but unfair frequency-domain scheduler and the fair round-robin scheduler for the general case in which the users see statistically non-identical SCs. An asymptotic analysis is then developed to gain further insights. The analysis and extensive numerical results bring out how correlation reduces throughput.
Abstract:
A low-power arcjet thruster of 1 kW class, with a gas mixture of H2-N2 or pure argon as the propellant, is fired at a chamber pressure of about 10 Pa. The nozzle temperature is detected with an infrared pyrometer; a plate set perpendicular to the plume axis and connected to a force sensor is used to measure the thrust; a probe with a tapered head is used to measure the impact pressure in the plume flow; and a double electrostatic probe system is applied to evaluate the electron temperature. Results indicate that the high nozzle temperature could adversely affect the conversion from enthalpy to kinetic energy. The plume flow deviates evidently from the LTE (local thermodynamic equilibrium) condition, and the rarefied-gas dynamic effect should be considered under the high-temperature, low-pressure conditions when analyzing the experimental phenomena.
Abstract:
[ES] The aim of this project has been to develop a software tool to measure the performance of networks based on 4G mobile technology, also known as LTE. To this end, a software system composed of a mobile application and an application server was created. Together, the system collects quality indicators of various kinds from the mobile network, which are then processed with mathematical software tools to obtain graphs and maps for analyzing the status and performance of a specific 4G network. The software was developed to prototype level, and real-world tests carried out with it yielded positive operational results.
Abstract:
[ES] This document describes a project comprising an analysis of the quality and performance of present-day 4G networks. The analysis considers different types of parameters, such as physical-layer indicators provided by the network itself or by base stations, and measurements of the quality perceived by the user. To carry out this study, a set of procedures is first defined for measuring the parameters under study in different areas of Bilbao. Once the measurements are obtained, post-processing begins, from which the results and conclusions of the project are derived.
Abstract:
One of the most challenging problems in mobile broadband networks is how to assign the available radio resources among the different mobile users. Traditionally, research proposals are either specific to some type of traffic or rely on computationally intensive algorithms aimed at optimizing the delivery of general-purpose traffic. Consequently, commercial networks do not incorporate these mechanisms due to the limited hardware resources at the mobile edge. Emerging 5G architectures introduce cloud computing principles to add flexible computational resources to Radio Access Networks. This paper makes use of Mobile Edge Computing concepts to introduce a new element, denoted the Mobile Edge Scheduler, aimed at minimizing the mean delay of general traffic flows in the LTE downlink. This element runs close to the eNodeB and implements a novel flow-aware and channel-aware scheduling policy in order to match transmissions to the available channel quality of end users.
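The abstract does not spell out the scheduling policy, but a flow-aware, channel-aware rule that targets mean delay can be illustrated with an SRPT-style metric: serve the flow whose remaining bytes would finish soonest at its current achievable rate. A hypothetical sketch; both the flow tuples and the rule itself are illustrative, not the Mobile Edge Scheduler's actual policy:

```python
def pick_flow(flows):
    # flows: list of (flow_id, remaining_bytes, achievable_rate_bps),
    # where achievable_rate_bps reflects the user's current channel quality.
    # SRPT-style, channel-aware rule: serve the flow with the smallest
    # remaining transmission time (remaining_bytes / achievable_rate).
    return min(flows, key=lambda f: f[1] / f[2])[0]

# hypothetical snapshot of three active flows at one TTI
flows = [(1, 1_000_000, 10e6),  # 0.10 s to drain
         (2, 200_000, 5e6),     # 0.04 s to drain
         (3, 500_000, 50e6)]    # 0.01 s to drain
served = pick_flow(flows)
```

The intuition is the classical one behind shortest-remaining-processing-time: finishing short (or well-channeled) flows first removes them from the queue quickly, which lowers the mean delay across all flows.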
Abstract:
Nowadays, in-lab train control simulation tools play a crucial role in reducing extensive and expensive on-site railway testing activities. In this paper, we present our contribution in this arena by detailing the internals of our European Railway Train Management System (ERTMS) in-lab demonstrator. The demonstrator is built on a general-purpose simulation framework, Riverbed Modeler (previously Opnet Modeler). Our framework models both ERTMS subsystems: the Automatic Train Protection application layer, based on movement authority message exchange, and the telecommunication subsystem, based on GSM-R communication technology. We provide detailed information on our modelling strategy and validate the simulation framework with real trace data. Finally, given the industry's ongoing migration from the legacy, obsolescent GSM-R to IP-based heterogeneous technologies, our simulation framework is a singular tool for railway operators. As an example, we present the assessment of related performance indicators for a specific railway network using a candidate replacement technology, LTE, versus the current legacy technology. To the best of our knowledge, there is no similar initiative able to measure the impact of the telecommunication subsystem on railway network availability.
Abstract:
The surge of Internet traffic, with exabytes of data flowing over operators' mobile networks, has created the need to rethink the paradigms behind the design of the mobile network architecture. The inadequacy of 4G UMTS Long Term Evolution (LTE), and even of its advanced version LTE-A, is evident, considering that traffic in the near future will be extremely heterogeneous, ranging from 4K resolution TV to machine-type communications. To keep up with these changes, academia, industry, and EU institutions have now engaged in the quest for new 5G technology. In this paper we present the innovative system design, concepts, and visions developed by the 5G PPP H2020 project SESAME (Small cEllS coordinAtion for Multi-tenancy and Edge services). The innovation of SESAME is manifold: i) combine the key 5G small cells with cloud technology, ii) promote and develop the concept of Small Cells-as-a-Service (SCaaS), iii) bring computing and storage power to the mobile network edge through the development of non-x86 ARM technology enabled micro-servers, and iv) address a large number of scenarios and use cases applying mobile edge computing.
Abstract:
The ever-increasing scale and complexity of modern software development projects require software organizations to adopt more effective development methods. Academia and industry have proposed a series of software engineering methods whose main goals are to improve product quality, safeguard project schedules, lower project costs, and reduce maintenance expenses. Test-Driven Development (TDD), a highly popular practice within agile development, has accumulated a large number of successful applications in both industry and academia over the past decade or so, and its ideas are being accepted by a growing number of software organizations and developers.

Although TDD can improve product quality and developer productivity, the difficulty of adopting it deters many software organizations. In real projects, schedule pressure often prevents strict TDD from being followed throughout; moreover, once quality requirements are met, there is no need to apply TDD to every module in pursuit of zero defects. It is therefore worthwhile to evaluate, in a sound and effective way, whether an organization should adopt TDD and for which modules, so as to provide a decision basis for project managers. Software process simulation is a low-cost and relatively scientific approach that can provide such decision support on the basis of available information.

Building on the large body of empirical TDD research, this thesis proposes a TDD module-selection method based on stochastic simulation of process models. The method uses stochastic process algebra as the modeling tool, measures module complexity through use cases to obtain the simulation parameters, runs the simulation, and analyzes the results with a TDD module-selection algorithm to derive the best TDD adoption strategy, providing project managers with a sound implementation plan. The main contributions are the following.

First, a simple method for measuring software module complexity is proposed. Starting from a module's internal and external complexity, it introduces the concept of structural entropy: structural entropy measures external complexity, event flows in use cases measure internal complexity, and together they yield the module complexity and the corresponding simulation parameters.

Second, stochastic process algebra simulation models are built for both the TDD process and the traditional software development process. After comparing several simulation methods, the Gibson-Bruck stochastic simulation algorithm is selected for simulating the software process, and the rationale for this choice is analyzed.

Third, a TDD module-selection algorithm based on stochastic simulation of process models is proposed, providing project managers with sound decision support based on the simulation results. To ease the application of the method, a TDD module-selection system based on stochastic process-model simulation is designed and implemented.