242 results for TDD-LTE
Abstract:
The ever-increasing demand from users who want high-quality broadband services while on the move is straining the efficiency of current spectrum allocation paradigms, leading to an overall feeling of spectrum scarcity. To circumvent this problem, two possible solutions are being investigated: (i) implementing new technologies, such as Cognitive Radios, capable of accessing temporarily or locally unused bands without interfering with the licensed services; (ii) releasing some spectrum bands thanks to new services providing higher spectral efficiency, e.g., DVB-T, and allocating them to new wireless systems. These two approaches are promising, but they also pose novel coexistence and interference-management challenges. In particular, the deployment of devices such as Cognitive Radios, characterized by the inherently unplanned, irregular and random locations of the network nodes, requires advanced mathematical techniques to explicitly model their spatial distribution. In such a context, system performance and optimization depend strongly on this spatial configuration. On the other hand, allocating some released spectrum bands to other wireless services poses severe coexistence issues with all the pre-existing services on the same or adjacent spectrum bands. This thesis investigates these methodologies for better spectrum usage. In particular, using Stochastic Geometry theory, a novel mathematical framework is introduced for cognitive networks, providing a closed-form expression for the coverage probability and single-integral forms for the average downlink rate and the Average Symbol Error Probability. Then, focusing on more regulatory aspects, the interference challenges between DVB-T and LTE systems are analysed, and a versatile methodology for their proper coexistence is proposed. Moreover, the studies performed within the CEPT SE43 working group on the amount of spectrum potentially available to Cognitive Radios and an analysis of the Hidden Node problem are provided. Finally, a study on the extension of cognitive technologies to Hybrid Satellite-Terrestrial Systems is proposed.
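As a point of reference for the kind of closed-form coverage expressions Stochastic Geometry yields, a standard result from the literature is reproduced below (a Poisson field of Rayleigh-fading interferers of density $\lambda$, path-loss exponent $\alpha > 2$, link distance $r$, interference-limited regime); the thesis derives its own expressions tailored to the cognitive-network setting:

```latex
P_{\mathrm{cov}}(\theta) \;=\; \mathbb{P}\!\left[\mathrm{SIR} > \theta\right]
\;=\; \exp\!\left(-\pi \lambda r^{2}\, \theta^{2/\alpha}\,
\Gamma\!\left(1+\tfrac{2}{\alpha}\right)\Gamma\!\left(1-\tfrac{2}{\alpha}\right)\right)
```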
Abstract:
This doctoral dissertation aims to establish fiber-optic technologies that overcome the limiting issues of data communications in indoor environments. The specific applications are broadband mobile distribution in different in-building scenarios and high-speed digital transmission over short-range wired optical systems. Two key enabling technologies are considered: Radio over Fiber (RoF) techniques over standard silica fibers for distributed antenna systems (DAS), and plastic optical fibers (POFs) for short-range communications. Hence, the objectives and achievements of this thesis relate to the application of RoF and POF technologies in different in-building scenarios. On one hand, a theoretical and experimental analysis, combined with demonstration activities, has been performed on cost-effective RoF systems. Extensive modeling of the impact of modal noise on both the linear and non-linear characteristics of RoF links over silica multimode fiber was carried out to derive link design rules for an optimum choice of transmitter, receiver and launching technique. Successful transmission of Long Term Evolution (LTE) mobile signals over the resulting optimized RoF link on silica multimode fiber, employing a Fabry-Perot laser diode, the central launch technique and a photodiode with a built-in ball lens, was demonstrated up to 525 m, with performance well within standard requirements. On the other hand, digital signal processing techniques to overcome the bandwidth limitation of POF have been investigated. An uncoded net bit-rate of 5.15 Gbit/s was obtained on a 50 m long POF link employing an eye-safe transmitter, a silicon photodiode, and DMT modulation with a bit- and power-loading algorithm. With the introduction of 3x2^N quadrature amplitude modulation constellation formats, an uncoded net bit-rate of 5.4 Gbit/s was obtained on a 50 m long POF link employing an eye-safe transmitter and a silicon avalanche photodiode. Moreover, simultaneous transmission of a 2 Gbit/s baseband DMT signal and a 200 Mbit/s ultra-wideband radio signal has been validated over a 50 m long POF link.
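A minimal sketch of a greedy bit-and-power-loading scheme of the kind typically used with DMT is shown below; the SNR gap, the channel profile and the power budget are illustrative assumptions, not parameters of the dissertation's POF link.

```python
# Greedy (Hughes-Hartogs style) bit-and-power loading sketch for a DMT link:
# bits are assigned one at a time to the subcarrier that needs the least extra
# power until the power budget is spent. Values below are assumptions.

def bit_loading(gains, power_budget, gap_db=9.8, max_bits=10):
    """gains: per-subcarrier gain-to-noise ratios (linear). Returns (bits, power)."""
    gap = 10 ** (gap_db / 10.0)
    bits = [0] * len(gains)
    power = [0.0] * len(gains)
    used = 0.0
    while True:
        # Extra power each subcarrier would need to carry one more bit.
        costs = [
            (gap * (2 ** (bits[i] + 1) - 1) / gains[i]) - power[i]
            if bits[i] < max_bits else float("inf")
            for i in range(len(gains))
        ]
        i = min(range(len(gains)), key=costs.__getitem__)
        if used + costs[i] > power_budget:
            break
        bits[i] += 1
        power[i] += costs[i]
        used += costs[i]
    return bits, power

if __name__ == "__main__":
    # Hypothetical low-pass channel: SNR falls off with subcarrier index.
    gains = [10 ** ((30 - 0.8 * n) / 10.0) for n in range(32)]
    bits, power = bit_loading(gains, power_budget=32.0)
    print("total bits per DMT symbol:", sum(bits))
```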
Abstract:
The field of wireless communication applications is undergoing permanent development (mobile radio standards: GSM/UMTS/LTE/5G; global navigation satellite systems (GNSS): GPS, GLONASS, Galileo, Beidou) towards ever higher data rates and increasing miniaturization, which results in a strong demand for new, optimized radio-frequency materials. This development has been particularly visible in recent years in the growing development and number of smartphones, which combine different technologies with different operating frequencies within one device (data: 1G-4G, GPS, WLAN, Bluetooth). The performance increase required for future technologies (e.g. 5G) can be realized with antenna systems based on MIMO (multiple-input & multiple-output, the controlled combination of several antennas), for which technologies based on dielectric loading are regarded as one of the most promising implementation approaches. The goal of this work was the development of a suitable paraelectric glass ceramic ($\varepsilon_{r}$ > 20, $Q \cdot f$ > 5000 GHz, |$\tau_f$| < 20 ppm/K in the GHz frequency range) in the $\mathrm{La_{2}O_{3}}$-$\mathrm{TiO_{2}}$-$\mathrm{SiO_{2}}$-$\mathrm{B_{2}O_{3}}$ system for dielectrically loaded mobile communication technologies, as an alternative to existing commercially used sintered ceramics. The focus was on the question of how the macroscopic dielectric properties of the glass ceramic correlate with its microstructure and how they can be modified through it. It was shown that the dielectric material requirements are met by the investigated system and that glass-ceramic-based dielectrics possess further advantageous non-electronic properties compared with sintered ceramics, so that dielectric glass ceramics can indeed be regarded as a suitable alternative. A stable green glass with a minimal glass-former content was developed and its chemical composition optimized with respect to devitrification and redox instabilities. Suitable dopants for low-loss $\mathrm{TiO_{2}}$-containing glass ceramics were identified. The influence of the melting conditions on nucleation was investigated, and the ceramization process was optimized towards a maximum fraction of the desired crystal phases in order to obtain optimum dielectric properties. The microscopic structure of the glass ceramics was analyzed and its influence on the macroscopic dielectric properties determined. The high-frequency loss mechanisms were investigated, and series of antenna prototypes were analyzed in order to demonstrate the suitability of glass-ceramic-based dielectrics for use in dielectric loading applications.
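For context, the three requirements quoted above are the standard microwave-dielectric figures of merit; the definitions below are textbook forms, not notation taken from this work:

```latex
\varepsilon_r \;(\text{relative permittivity}), \qquad
Q = \frac{1}{\tan\delta}, \quad Q\cdot f \;\;[\mathrm{GHz}], \qquad
\tau_f = \frac{1}{f_0}\,\frac{\partial f_0}{\partial T} \;\;[\mathrm{ppm/K}]
```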
Abstract:
In recent years, the population has been exposed to various types of electromagnetic fields generated by electronic equipment and telecommunication devices. In this thesis, the SAR and the thermal effects produced by three radio-frequency patch antennas are evaluated both on keratinocytes (epidermal cells) in vitro and on the human epidermis in vivo, the latter described by a multilayer model of biological tissues. The designed antennas have resonance frequencies of 1.8 and 2.4 GHz, typical of the bands used by LTE (Long Term Evolution, the most recent evolution of cellular mobile telephony standards) and by modern Wi-Fi technologies respectively, and of 60 GHz, belonging to the so-called millimeter waves. The SAR (Specific Absorption Rate, a quantity that measures how strongly the waves are absorbed by biological tissues) and the temperature variations produced by the applied electromagnetic field are then evaluated: this is done through the steady-state heat equation and, for the in vivo epidermis, through the Bioheat Equation, which also accounts for blood circulation and for the heat generated by the metabolic processes taking place in the organism.
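For reference, the evaluated quantities take the standard textbook forms below (generic notation assumed here, not the thesis's own): the SAR for a tissue of conductivity $\sigma$ and mass density $\rho$ exposed to an RMS electric field $E$, and the steady-state Pennes bioheat equation with thermal conductivity $k$, blood perfusion term $\rho_b c_b \omega_b (T_a - T)$ and metabolic heat $Q_m$:

```latex
\mathrm{SAR} = \frac{\sigma\,|E|^{2}}{\rho}\ \ [\mathrm{W/kg}], \qquad
\nabla\cdot\!\left(k\,\nabla T\right)
+ \rho_{b}\,c_{b}\,\omega_{b}\,(T_{a}-T)
+ Q_{m} + \rho\,\mathrm{SAR} = 0
```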
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, in which end-users expect services to be accessible anytime and anywhere. Service availability is also tied to the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. Using this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network-interface state management and of application-level network design on energy consumption. Additionally, the outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
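As a minimal sketch of the empirical accounting such a methodology rests on, the snippet below integrates sampled power draw of a network interface over a transfer to obtain energy in joules; the sampling rate and the voltage/current values are illustrative assumptions, not measurements from the paper.

```python
# Energy accounting sketch: sum V*I over fixed-interval samples of the
# interface's supply voltage and current, multiplied by the sample period.

def energy_joules(samples, dt):
    """samples: list of (voltage_V, current_A) pairs taken every dt seconds."""
    return sum(v * i for v, i in samples) * dt

if __name__ == "__main__":
    # Hypothetical 1 kHz trace of a wireless interface: active burst, then idle.
    dt = 0.001
    samples = [(3.8, 0.35)] * 1200 + [(3.8, 0.02)] * 800
    print(f"energy for the transfer: {energy_joules(samples, dt):.3f} J")
```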
Abstract:
MIPAS observations of temperature, water vapor, and ozone in October 2009, as derived with the scientific level-2 processor run by the Karlsruhe Institute of Technology (KIT), Institute for Meteorology and Climate Research (IMK), and the CSIC Instituto de Astrofísica de Andalucía (IAA), and retrieved from version 4.67 level-1b data, have been compared to co-located observations obtained during the MOHAVE-2009 field campaign at the Table Mountain Facility near Pasadena, California. The MIPAS measurements were validated with respect to potential profile biases and to their precision estimates. The MOHAVE-2009 campaign provided measurements of atmospheric profiles of temperature, water vapor/relative humidity, and ozone from the ground to the mesosphere by a suite of instruments including radiosondes, ozonesondes, frost point hygrometers, lidars, microwave radiometers and Fourier transform infrared (FTIR) spectrometers. For MIPAS temperatures (version V4O_T_204), no significant bias was detected in the middle stratosphere; between 22 km and the tropopause MIPAS temperatures were found to be biased low by up to 2 K, while below the tropopause they were found to be too high by the same amount. These findings confirm earlier comparisons of MIPAS temperatures to ECMWF data, which revealed similar differences. Between 12 km and 45 km, MIPAS water vapor (version V4O_H2O_203) is well within 10% of the data of all correlative instruments. The well-known dry bias of MIPAS water vapor above 50 km, due to the neglect of non-LTE effects in the current retrievals, was confirmed. Some instruments indicate that MIPAS water vapor might be biased high by 20 to 40% around 10 km (or 5 km below the tropopause), but a consistent picture could not be derived from all comparisons. MIPAS ozone (version V4O_O3_202) has a high bias of up to +0.9 ppmv around 37 km, which is due to an unidentified continuum-like radiance contribution. No further significant biases were detected. A cross-comparison to co-located observations from other satellite instruments (Aura/MLS, ACE-FTS, AIRS) is provided as well.
Abstract:
Bluetooth wireless technology is a robust short-range communications system designed for low power (10-meter range) and low cost. It operates in the 2.4 GHz Industrial Scientific Medical (ISM) band and employs two techniques for minimizing interference: a frequency hopping scheme, which nominally splits the 2.400-2.485 GHz band into 79 frequency channels, and a time division duplex (TDD) scheme, which switches to a new frequency channel on 625 μs boundaries. During normal operation a Bluetooth device is therefore active on a different frequency channel every 625 μs, minimizing the chance of continuous interference degrading the performance of the system. The smallest unit of a Bluetooth network is called a piconet and can have a maximum of eight nodes. Bluetooth devices assume one of two roles within a piconet, master or slave: the master governs quality of service and the frequency hopping schedule within the piconet, and the slave follows the master's schedule. A piconet must have a single master and up to 7 active slaves. By allowing devices to hold roles in multiple piconets through time multiplexing, i.e. slave/slave or master/slave, Bluetooth allows multiple piconets to be interconnected into larger networks called scatternets. Bluetooth technology is explored here in the context of enabling ad-hoc networks. The Bluetooth specification leaves the scatternet formation protocol flexible, outlining only the mechanisms necessary for future protocol implementations. A new protocol for scatternet formation and maintenance, mscat, is presented and its performance is evaluated using a Bluetooth simulator. The free variables manipulated in this study are device activity and the probabilities of devices performing discovery procedures. The relationship between the role a device has in the scatternet and its probability of performing discovery was examined and related to the resulting scatternet topology. The results show that mscat creates dense network topologies for networks of 30, 50 and 70 nodes. The mscat protocol yields approximately a 33% increase in slaves per piconet and a reduction of approximately 12.5% in the average number of roles per node. For the 50-node scenarios, the parameter set producing the best outcome is an unconnected node inquiry probability (UP) of 10%, a master node inquiry probability (MP) of 80% and a slave inquiry probability (SP) of 40%. The mscat protocol extends the Bluetooth specification for the formation and maintenance of scatternets in an ad-hoc network.
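As an illustration of how the role-dependent discovery probabilities above can drive a node's behaviour in each simulation cycle, here is a minimal sketch; only the UP/MP/SP values come from the abstract, while the function names and the inquiry/inquiry-scan split are assumptions rather than mscat's actual logic.

```python
import random

# Role-dependent inquiry probabilities reported as the best 50-node setting
# in the abstract (UP/MP/SP). Everything else is an illustrative assumption.
INQUIRY_PROB = {
    "unconnected": 0.10,  # UP
    "master": 0.80,       # MP
    "slave": 0.40,        # SP
}

def discovery_action(role: str) -> str:
    """Decide, for one scheduling cycle, whether a node performs inquiry
    (actively discovers) or inquiry scan (listens), based on its role."""
    return "inquiry" if random.random() < INQUIRY_PROB[role] else "inquiry_scan"

if __name__ == "__main__":
    for role in INQUIRY_PROB:
        actions = [discovery_action(role) for _ in range(10000)]
        share = actions.count("inquiry") / len(actions)
        print(f"{role:12s} -> inquiry in {share:.1%} of cycles")
```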
Abstract:
The spectacular images of Comet 103P/Hartley 2 recorded by the Medium Resolution Instrument (MRI) and High Resolution Instrument (HRI) on board the Extrasolar Planet Observation and Deep Impact Extended Investigation (EPOXI) spacecraft, during the Deep Impact extended mission, revealed that its bi-lobed, very active nucleus outgasses volatiles heterogeneously. Indeed, CO2 is the primary driver of activity, dragging chunks of pure ice out of the sub-solar lobe of the nucleus; these chunks appear to be the main source of water in Hartley 2's coma, sublimating slowly as they drift away from the nucleus. Water vapor, however, is released by direct sublimation of the nucleus at the waist without any significant amount of either CO2 or icy grains. The coma structure for a comet with such areas of diverse chemistry differs from the usual models in which gases are produced homogeneously from the surface. We use the fully kinetic Direct Simulation Monte Carlo model of Tenishev et al. (Tenishev, V.M., Combi, M.R., Davidsson, B. [2008]. Astrophys. J. 685, 659-677; Tenishev, V.M., Combi, M.R., Rubin, M. [2011]. Astrophys. J. 732, 104-120), applied to Comet 103P/Hartley 2 and including sublimating icy grains, to reproduce the observations made by EPOXI and ground-based measurements. A realistic bi-lobed nucleus with a succession of active areas of different chemistry was included in the model, enabling us to study the coma of Hartley 2 in detail. The gas production rates of each area were found by fitting spectra computed with a line-by-line non-LTE radiative transfer model to the HRI observations. The presence of long-lived icy grains, which are pushed anti-sunward by radiation pressure, explains the observed OH asymmetry with enhancement on the night side of the coma.
Abstract:
Virtualisation of cellular networks can be seen as a way to significantly reduce the complexity of the processes required nowadays to provide reliable cellular networks. The Future Communication Architecture for Mobile Cloud Services: Mobile Cloud Networking (MCN) is an EU FP7 Large-scale Integrating Project (IP) funded by the European Commission that focuses on cloud computing concepts to achieve virtualisation of cellular networks. It aims at the development of a fully cloud-based mobile communication and application platform; more specifically, it aims to investigate, implement and evaluate the technological foundations for the Long Term Evolution (LTE) mobile communication system, based on Mobile Network plus Decentralized Computing plus Smart Storage offered as one atomic service: On-Demand, Elastic and Pay-As-You-Go. This paper provides a brief overview of the MCN project and discusses the challenges that need to be solved.
Abstract:
Soft X-ray lasing in a Ni-like plasma gain medium requires optimum electron temperature and density, both for reaching the Ni-like ion stage and for achieving population inversion in the $3d^{9}4d^{1}\,(J=0) \rightarrow 3d^{9}4p^{1}\,(J=1)$ laser transition. Various scaling laws, expressed as functions of the operating parameters, were compared with respect to their predictions of the optimum temperatures and densities. It is shown that the widely adopted local thermodynamic equilibrium (LTE) model underestimates the optimum plasma-lasing conditions. Non-LTE models, on the other hand, especially when complemented with dielectronic recombination, provide accurate predictions of the optimum plasma-lasing conditions. It is further shown that, for targets with Z equal to or greater than that of the rare-earth elements (e.g. Sm), the optimum electron density for plasma lasing is not accessible with pump pulses at the fundamental wavelength ($1\omega$, $\lambda = 1\,\mu\mathrm{m}$). This observation explains a fundamental difficulty in saturating plasma-based X-ray lasers at wavelengths below 6.8 nm, unless $2\omega$ pumping is used.
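The last observation follows from the standard critical-density relation for laser-plasma coupling (a textbook expression, not taken from the paper): pump light cannot propagate to electron densities above

```latex
n_{c} \;\simeq\; \frac{1.1 \times 10^{21}}{\left(\lambda\,[\mu\mathrm{m}]\right)^{2}}\ \mathrm{cm}^{-3}
```

so moving from $1\omega$ ($\lambda = 1\,\mu$m) to $2\omega$ ($\lambda = 0.5\,\mu$m) pumping raises the accessible electron density by a factor of four.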
Abstract:
Wireless networks have become increasingly popular because of ease of installation, ease of access, and support for smart terminals and gadgets on the move. Within the overall life cycle of providing green wireless technology, from production through operation to removal, this chapter focuses on the operation phase and summarizes insights into the energy consumption of the major technologies. The chapter concentrates on the edge of the network, comprising network access points (APs) and mobile user devices. It discusses the particularities of the most important wireless networking technologies: wireless access networks including 3G/LTE and wireless mesh networks (WMNs), wireless sensor networks (WSNs), and ad-hoc and opportunistic networks. Concerning energy efficiency, the chapter discusses the challenges in access, wireless sensor, and ad-hoc and opportunistic networks.
Abstract:
Commoditization and virtualization of wireless networks are changing the economics of mobile networks, helping network providers (e.g., MNOs, MVNOs) move from proprietary and bespoke hardware and software platforms toward an open, cost-effective, and flexible cellular ecosystem. In addition, rich and innovative local services can be created efficiently through cloudification by leveraging the existing infrastructure. In this work, we present RANaaS, a cloudified radio access network delivered as a service. RANaaS provides the service life-cycle of an on-demand, elastic, and pay-as-you-go 3GPP RAN instantiated on top of a cloud infrastructure. We demonstrate an example of real-time cloudified LTE network deployment using the OpenAirInterface LTE implementation and OpenStack running on commodity hardware, as well as the flexibility and performance of the developed platform.
Abstract:
Service providers make use of cost-effective wireless solutions to identify, localize, and possibly track users through their carried mobile devices (MDs) in order to support added services such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role for such services; however, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. To better understand the parameters that affect indoor localization, we investigate several factors that affect indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capture of the GSM spectrum while operating in real time. Processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about tracked devices' radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate devices' locations in real environments. Given how challenging the indoor environment is for radio signals, with NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve the localization further, we extended our work with a hybrid approach that uses both WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, any prior knowledge of the indoor layout or any offline calibration of the system.
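As a simple point of reference for the proximity-based idea mentioned above, the sketch below estimates a position as an RSS-weighted centroid of known anchor positions; it is a generic baseline with hypothetical coordinates and readings, not one of the localization algorithms developed in the thesis.

```python
# Weighted-centroid localization sketch: anchors that hear the device more
# strongly pull the estimate closer to themselves.

def rss_to_weight(rss_dbm: float) -> float:
    """Map an RSS reading (dBm) to a positive weight (linear-scale power)."""
    return 10 ** (rss_dbm / 10.0)

def weighted_centroid(observations):
    """observations: list of ((x, y), rss_dbm) for anchors that heard the device."""
    weighted = [(pos, rss_to_weight(rss)) for pos, rss in observations]
    total = sum(w for _, w in weighted)
    x = sum(pos[0] * w for pos, w in weighted) / total
    y = sum(pos[1] * w for pos, w in weighted) / total
    return x, y

if __name__ == "__main__":
    # Hypothetical anchor positions (metres) and RSS readings (dBm).
    obs = [((0.0, 0.0), -55.0), ((10.0, 0.0), -70.0), ((5.0, 8.0), -62.0)]
    print("estimated position:", weighted_centroid(obs))
```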
Abstract:
Content-Centric Networking (CCN) naturally supports multi-path communication, as it allows the simultaneous use of multiple interfaces (e.g. LTE and WiFi). When multiple sources and multiple clients are considered, the optimal set of distribution trees should be determined in order to make optimal use of all the available interfaces. This is not a trivial task, as it is a computationally intensive procedure that has to be carried out centrally. The need for central coordination can be removed by employing network coding, which also offers improved resilience to errors and large throughput gains. In this paper, we propose NetCodCCN, a protocol for integrating network coding in CCN. In contrast to previous works proposing to enable network coding in CCN, NetCodCCN permits Interest aggregation and Interest pipelining, which reduce data retrieval times. The experimental evaluation shows that the proposed protocol leads to significant improvements in content retrieval delay compared to the original CCN. Our results demonstrate that the use of network coding adds robustness to losses and allows the available network resources to be exploited more efficiently. The performance gains are verified for content retrieval in various network scenarios.
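To illustrate why network coding removes the need for centrally computed distribution trees, here is a minimal sketch of random linear network coding over GF(2): any set of coded packets whose coefficient vectors are linearly independent suffices to recover the generation, so a client can pull coded data over any available faces. The names and chunk contents are assumptions; this is not the NetCodCCN protocol itself.

```python
import random

def encode(generation):
    """Produce one coded packet (coeffs, payload): a random XOR combination
    of the generation's equal-size source packets, tagged with its GF(2)
    coefficient vector."""
    size = len(generation[0])
    coeffs = [random.randint(0, 1) for _ in generation]
    if not any(coeffs):  # avoid the useless all-zero combination
        coeffs[random.randrange(len(coeffs))] = 1
    payload = bytearray(size)
    for c, pkt in zip(coeffs, generation):
        if c:
            for i in range(size):
                payload[i] ^= pkt[i]
    return coeffs, bytes(payload)

if __name__ == "__main__":
    generation = [b"chunk-A!", b"chunk-B!", b"chunk-C!"]  # hypothetical content chunks
    for _ in range(4):  # coded packets can be fetched over any available face
        coeffs, payload = encode(generation)
        print(coeffs, payload.hex())
```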