975 results for Antenna Bandwidth
Abstract:
The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing computation and storage costs, they propose the use of the multinomial distribution function, which permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and, in turn, makes possible a simple deconvolution process. Moreover, under certain conditions, additional improvements may be achieved.
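A minimal numeric sketch of the idea, assuming on-off sources grouped into classes (all class names and traffic parameters below are illustrative, not from the paper): each class's demand distribution is binomial/multinomial and available in closed form, so the aggregate is a few convolutions, and removing a departing connection never requires numerical deconvolution.

```python
import numpy as np
from scipy.stats import binom

# Illustrative sketch, not the authors' exact algorithm: per class, sources are
# on-off with activity probability p and peak rate b, so the number of active
# class-i sources is Binomial(n_i, p_i) and its demand PMF is known in closed
# form -- no stored intermediate convolution results are needed.
classes = {"voice": dict(n=40, p=0.35, b=1),
           "video": dict(n=5,  p=0.50, b=6)}

def class_pmf(n, p, b):
    """Demand PMF of one class on the grid 0, 1, ..., n*b bandwidth units."""
    pmf = np.zeros(n * b + 1)
    pmf[np.arange(n + 1) * b] = binom.pmf(np.arange(n + 1), n, p)
    return pmf

def aggregate_pmf(classes):
    """Convolve the per-class PMFs into the aggregate-demand PMF."""
    pmf = np.array([1.0])
    for c in classes.values():
        pmf = np.convolve(pmf, class_pmf(c["n"], c["p"], c["b"]))
    return pmf

def overload_prob(pmf, capacity):
    return pmf[capacity + 1:].sum()

# Admission test for a new video connection on a 60-unit link:
classes["video"]["n"] += 1
admit = overload_prob(aggregate_pmf(classes), capacity=60) < 1e-3
# "Deconvolution" on departure is just n -= 1 followed by re-evaluating that
# class's binomial PMF -- no numerical division of distributions.
```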
Abstract:
Next Generation Access Networks (NGAN) are the next step forward to deliver broadband services and to facilitate the integration of different technologies. It is plausible to assume that, from a technological standpoint, the Future Internet will be composed of long-range high-speed optical networks; a number of wireless networks at the edge; and, in between, several access technologies, among which Passive Optical Networks (xPON) are very likely to succeed, due to their simplicity, low cost, and increased bandwidth. Among the different PON technologies, the Ethernet PON (EPON) is the most promising alternative to satisfy operator and user needs, owing to its cost, flexibility and interoperability with other technologies. One of the most interesting challenges in such technologies relates to the scheduling and allocation of resources in the upstream (shared) channel. The aim of this research project is to study and evaluate current contributions and propose new efficient solutions to address the resource allocation issues in Next Generation EPON (NG-EPON). Key issues in this context are future end-user needs, integrated quality of service (QoS) support and optimized service provisioning for real-time and elastic flows. This project will unveil research opportunities, issue recommendations and propose novel mechanisms associated with convergence within heterogeneous access networks, and will thus serve as a basis for long-term research projects in this direction. The project has served as a platform for the generation of new concepts and solutions that have been published in national and international conferences, scientific journals and a book chapter. We expect additional publications to follow in the coming months.
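The abstract does not specify an algorithm; as context for the upstream-scheduling problem it targets, here is a minimal sketch of a classic EPON dynamic bandwidth allocation baseline (IPACT with limited service), with illustrative parameter values:

```python
# A minimal sketch of a classic EPON upstream scheduling baseline (IPACT,
# "limited service"), included only to make the resource-allocation problem
# concrete; the project's own NG-EPON mechanisms are not specified here.
def limited_service_grants(requests_bytes, w_max=15000):
    """OLT grant per ONU: serve the reported queue, capped at w_max bytes."""
    return [min(r, w_max) for r in requests_bytes]

# ONUs report queue occupancies via REPORT messages; the OLT replies with GATEs.
print(limited_service_grants([4000, 22000, 0, 9000]))  # [4000, 15000, 0, 9000]
```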
Abstract:
Computed tomography (CT) is an imaging technique in which interest has kept growing since its introduction in the early 1970s. In the clinical environment, this imaging system has emerged as a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure that the benefit-risk balance works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases that require several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults. For this population, the risk of developing a cancer whose latency period exceeds 20 years is significantly higher than for adults. Assuming that each examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce as much as possible the dose delivered in CT examinations of children and young adults while maintaining diagnostic image quality, in order to propose optimized protocols.

The optimization step requires evaluating both the delivered dose and the image quality needed for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first, the "physical" approach, computes physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-defined conditions. Although this technique has limitations, because it does not take the radiologist's perception into account, it enables fast and simple characterization of certain image properties. The second, the "clinical" approach, is based on the evaluation of anatomical structures (diagnostic criteria) present in patient images. Radiologists involved in the assessment step were asked to score the quality of the structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and time-consuming; nevertheless, it has the advantage of being very close to the radiologist's practice and can be considered a reference method.

Primarily, this work revealed that the statistical iterative reconstruction algorithms studied in the clinic (ASIR and VEO) have a strong potential to reduce CT dose (by up to 90%). However, by their very mechanisms, they modify the appearance of the image, introducing a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise frequency spectrum, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also demonstrated that integrating these new reconstruction techniques into clinical practice cannot be done simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work and the tools developed can also guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
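Among the physical metrics mentioned (SD, MTF, NPS), the noise power spectrum is the one that captures the texture change discussed above. Below is a minimal sketch of the conventional NPS estimator from uniform-phantom ROIs, with hypothetical inputs; it is the standard textbook estimator, not necessarily the exact pipeline of the thesis.

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """Estimate the 2-D noise power spectrum from ROIs of a uniform phantom.

    Standard estimator (sketch): detrend each ROI, average the squared FFT
    magnitudes, and normalize by pixel area over the number of pixels.
    """
    rois = np.asarray(rois, dtype=float)          # shape: (n_rois, ny, nx)
    n, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2  # FFT over the last two axes
    return (pixel_mm ** 2 / (nx * ny)) * spectra.mean(axis=0)

# A texture change under iterative reconstruction shows up as a shift of the
# NPS peak toward lower spatial frequencies, even at equal noise magnitude (SD).
```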
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file sharing application, while the user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end-user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment. Each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximate view of the system or part of it. This approximate view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled in terms of quotas of messages reflecting the receiving and sending capacities at each node. To allow deployment in a large-scale system, we take the available memory at processes into account by limiting the view they have to maintain about the system. Using this partial view, we propose three scalable broadcast algorithms, which are based on a propagation overlay that tends toward the global tree overlay and adapts to some constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
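As an illustration of routing over tree overlays that maximize path reliability, a common construction (a sketch under assumed link-reliability estimates, not the thesis's actual protocol) runs Dijkstra on the negative log of per-link success probabilities, so that minimal additive cost corresponds to maximal reliability product:

```python
import heapq
from math import log

def most_reliable_paths(adj, source):
    """Dijkstra on -log(link reliability): shortest additive cost equals
    maximal product of per-link success probabilities."""
    best = {source: 0.0}                 # cost = -log(path reliability)
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > best.get(u, float("inf")):
            continue
        for v, p in adj[u]:              # p: estimated link reliability in (0, 1]
            c = cost - log(p)
            if c < best.get(v, float("inf")):
                best[v], parent[v] = c, u
                heapq.heappush(heap, (c, v))
    return parent                        # shortest-path tree, usable as overlay

# Hypothetical approximated view of the system: node -> [(neighbor, reliability)]
adj = {"a": [("b", 0.99), ("c", 0.80)],
       "b": [("c", 0.95), ("d", 0.90)],
       "c": [("d", 0.70)],
       "d": []}
tree = most_reliable_paths(adj, "a")     # e.g. d is reached via a -> b -> d
```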
Abstract:
In recent years, telemetry systems for medical applications have grown significantly in the diagnosis and monitoring of, for example, glucose, blood pressure, temperature and heart rate. Implanted devices broaden the range of applications in medicine and bring an improvement in quality of life for the user. For this reason, this project studies two of the most common antennas, the dipole and the patch, the latter being especially used in implanted applications. In the analysis of these antennas, characteristics related to the application environment, as well as to the antenna itself, have been parametrized, explaining the behavior that, unlike in free space, the antennas exhibit when these parameters change. In addition, a setup for measuring implanted antennas based on a single-layer model of the human body has been implemented. Compared with the results of simulations carried out with the FEKO software, good agreement has been obtained in the empirical measurement of the matching and gain of the microstrip antennas. Thanks to the parametric analysis, this project also presents several antenna designs that optimize the realizable gain, with the goal of achieving the best possible communication with the external device or base station.
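A first-order effect behind such parametric studies is that the wavelength inside tissue shrinks by the square root of the relative permittivity, which drastically shortens resonant antennas. A small sketch with assumed values (a MedRadio-band frequency and a muscle-like single-layer permittivity; neither value is taken from the project):

```python
# First-order effect behind the parametric study: inside tissue the wavelength
# shrinks by sqrt(eps_r), so a resonant half-wave dipole is much shorter than
# in free space. Values below are illustrative (muscle-like tissue at 403 MHz,
# the MedRadio/MICS band commonly used for implants).
c = 299_792_458.0          # speed of light, m/s
f = 403e6                  # operating frequency, Hz
eps_r = 57.0               # assumed relative permittivity of the one-layer body model

lam_free = c / f
lam_tissue = lam_free / eps_r ** 0.5
print(f"half-wave dipole: {lam_free/2*100:.1f} cm in air, "
      f"{lam_tissue/2*100:.1f} cm in tissue")   # ~37.2 cm vs ~4.9 cm
```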
Abstract:
The growing use of mobile devices and the great advances in wireless applications and systems have driven the demand for miniaturized band-pass filters that operate at high frequencies and offer high performance. Filters based on Bulk Acoustic Wave (BAW) resonators are becoming the best alternative to Surface Acoustic Wave (SAW) filters, since they operate at higher frequencies, can handle higher power levels, and are compatible with CMOS technology. The ladder filter, which uses BAW resonators, is currently the best option because of its ease of design and low manufacturing cost, although the coupled resonator filter (CRF) offers better performance, such as greater bandwidth, smaller size and mode conversion. The drawback of this type of filter lies in its design complexity and high cost. This work carries out the design of a CRF from a rather strict set of specifications, demonstrating its high performance despite its main disadvantage: manufacturing cost.
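For context on what constrains such designs, BAW resonators are commonly described by the Butterworth-Van Dyke (BVD) equivalent circuit, whose series/parallel resonance spacing bounds the realizable filter bandwidth. A sketch with illustrative component values (not the values of this design):

```python
import numpy as np

# Butterworth-Van Dyke (BVD) equivalent circuit of a BAW resonator: a motional
# branch (Lm, Cm) in parallel with the plate capacitance C0. The spacing between
# the series resonance fs and the parallel resonance fp bounds the bandwidth a
# ladder or CRF design can realize. Component values below are illustrative.
Lm, Cm, C0 = 80e-9, 80e-15, 2e-12   # H, F, F

fs = 1 / (2 * np.pi * np.sqrt(Lm * Cm))
fp = fs * np.sqrt(1 + Cm / C0)
kt2_eff = (np.pi ** 2 / 4) * (fp - fs) / fp   # common effective-coupling estimate

print(f"fs = {fs/1e9:.3f} GHz, fp = {fp/1e9:.3f} GHz, kt2 ~ {100*kt2_eff:.1f}%")
```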
Abstract:
The optimization of the pilot overhead in single-user wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depend on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.
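A small numeric sketch of the optimization, using the standard block-fading lower bound with a pilot-based MMSE channel estimate (a textbook simplification, not the paper's continuous-fading formulation; the SNR and Doppler values are illustrative):

```python
import numpy as np

# Numeric sketch: with N_p = alpha*T pilots per coherence block of T symbols,
# an MMSE channel estimate has error variance 1/(1 + alpha*T*rho); treating
# that error as extra Gaussian noise gives the usual lower bound on spectral
# efficiency. All modeling choices here are textbook simplifications.
def spectral_eff(alpha, T, rho):
    sigma2 = 1.0 / (1.0 + alpha * T * rho)          # estimation-error variance
    rho_eff = rho * (1 - sigma2) / (1 + rho * sigma2)
    return (1 - alpha) * np.log2(1 + rho_eff)

rho = 10.0                                           # SNR = 10 dB
for fD in (1e-4, 1e-3, 1e-2):                        # normalized Doppler
    T = int(1 / (2 * fD))                            # coherence block length
    alphas = np.arange(1, T) / T
    best = alphas[np.argmax(spectral_eff(alphas, T, rho))]
    print(f"fD = {fD:.0e}: optimal overhead ~ {best:.3f}")
# The optima grow roughly as sqrt(fD), mirroring the square-root dependence
# on the normalized Doppler frequency derived in the paper.
```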
Abstract:
A contemporary perspective on the tradeoff between transmit antenna diversity and spatial multiplexing is provided. It is argued that, in the context of most modern wireless systems and for the operating points of interest, transmission techniques that utilize all available spatial degrees of freedom for multiplexing outperform techniques that explicitly sacrifice spatial multiplexing for diversity. In the context of such systems, therefore, there essentially is no decision to be made between transmit antenna diversity and spatial multiplexing in MIMO communication. Reaching this conclusion, however, requires that the channel and some key system features be adequately modeled and that suitable performance metrics be adopted; failure to do so may bring about starkly different conclusions. As a specific example, this contrast is illustrated using the 3GPP Long-Term Evolution system design.
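For reference, the diversity-multiplexing tension discussed here is often summarized by the Zheng-Tse tradeoff for an M x N i.i.d. Rayleigh channel (a standard result quoted as background, not this paper's own analysis):

```latex
d^*(r) = (M - r)\,(N - r), \qquad 0 \le r \le \min(M, N)
```

A scheme transmitting at rate r log2(SNR) can achieve at most diversity order d*(r); operating at full multiplexing, r = min(M, N), leaves no "tradeoff" diversity, which is precisely the operating point the paper argues is preferable in modern systems, with coding and link adaptation supplying robustness.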
Abstract:
The analysis of multiantenna capacity in the high-SNR regime has hitherto focused on the high-SNR slope (or maximum multiplexing gain), which quantifies the multiplicative capacity increase as a function of the number of antennas. This traditional characterization is unable to assess the impact of prominent channel features since, for a majority of channels, the slope equals the minimum of the number of transmit and receive antennas. Furthermore, a characterization based solely on the slope captures only the scaling; it has no notion of the power required for a certain capacity. This paper advocates a more refined characterization whereby, as a function of SNR|dB, the high-SNR capacity is expanded as an affine function in which the impact of channel features such as antenna correlation, unfaded components, etc., resides in the zero-order term, or power offset. The power offset, for which we find insightful closed-form expressions, is shown to play a chief role for SNR levels of practical interest.
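In standard notation, the advocated expansion reads as follows, with S-infinity the high-SNR slope and L-infinity the power offset in 3-dB units:

```latex
C(\mathrm{SNR}) = S_\infty \left( \frac{\mathrm{SNR}|_{\mathrm{dB}}}{3\ \mathrm{dB}} - \mathcal{L}_\infty \right) + o(1)
```

Channel features such as antenna correlation or unfaded components shift the power offset while typically leaving the slope, min(number of transmit antennas, number of receive antennas), unchanged.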
Abstract:
The simultaneous use of multiple transmit and receive antennas can unleash very large capacity increases in rich multipath environments. Although such capacities can be approached by layered multi-antenna architectures with per-antenna rate control, the need for short-term feedback arises as a potential impediment, in particular as the number of antennas, and thus the number of rates to be controlled, increases. What we show, however, is that the need for short-term feedback in fact vanishes as the number of antennas and/or the diversity order increases. Specifically, the rate supported by each transmit antenna becomes deterministic and a sole function of the signal-to-noise ratio, the ratio of transmit to receive antennas, and the decoding order, all of which are either fixed or slowly varying. More generally, we illustrate, through this specific derivation, the relevance of some established random CDMA results to the single-user multi-antenna problem.
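One form of the random-matrix machinery alluded to: for a linear MMSE receiver with equal-power streams and antenna ratio beta (transmit over receive), the large-system per-stream SINR converges to the solution of the Tse-Hanly fixed point (quoted here as background; the paper's exact setting, including the decoding order under successive cancellation, is richer):

```latex
\mathrm{SINR} = \frac{\mathsf{SNR}}{1 + \beta \, \dfrac{\mathsf{SNR}}{1 + \mathrm{SINR}}}
```

Since the solution depends only on the SNR and on beta, both fixed or slowly varying, the rate supported by each antenna needs no short-term feedback.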
Abstract:
Exact closed-form expressions are obtained for the outage probability of maximal ratio combining in η-μ fading channels with antenna correlation and co-channel interference. The scenario considered in this work assumes the joint presence of background white Gaussian noise and independent Rayleigh-faded interferers with arbitrary powers. Outage probability results are obtained through an appropriate generalization of the moment-generating function of the η-μ fading distribution, for which new closed-form expressions are provided.
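The MGF-to-outage machinery can be illustrated generically with Gil-Pelaez inversion. The sketch below applies it to a gamma-distributed combiner output, a stand-in that matches Nakagami-m-like special cases rather than the paper's exact η-μ-with-interference expressions; all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

# Generic MGF/CF-to-outage machinery (Gil-Pelaez inversion), shown on a gamma-
# distributed combiner output so the result can be checked against scipy's
# closed-form CDF. The paper's eta-mu MGF would slot into phi() the same way.
def outage_from_cf(phi, threshold):
    """P(X < threshold) via Gil-Pelaez: F(x) = 1/2 - (1/pi) * integral."""
    integrand = lambda t: np.imag(phi(t) * np.exp(-1j * t * threshold)) / t
    val, _ = quad(integrand, 1e-9, 200.0, limit=500)
    return 0.5 - val / np.pi

m, mean_snr = 2.0, 10.0                   # Nakagami-like shape, mean SNR
phi = lambda t: (1 - 1j * t * mean_snr / m) ** (-m)   # gamma characteristic fn
gamma_th = 3.0                            # SINR threshold

print(outage_from_cf(phi, gamma_th))                  # numeric inversion, ~0.122
print(gamma.cdf(gamma_th, a=m, scale=mean_snr / m))   # closed-form check
```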
Abstract:
The purpose of this research was to summarize existing nondestructive test methods that have the potential to be used to detect materials-related distress (MRD) in concrete pavements. The various nondestructive test methods were then subjected to selection criteria that helped to reduce the size of the list so that specific techniques could be investigated in more detail. The main test methods that were determined to be applicable to this study included two stress-wave propagation techniques (impact-echo and spectral analysis of surface waves techniques), infrared thermography, ground penetrating radar (GPR), and visual inspection. The GPR technique was selected for a preliminary round of “proof of concept” trials. GPR surveys were carried out over a variety of portland cement concrete pavements for this study using two different systems. One of the systems was a state-of-the-art GPR system that allowed data to be collected at highway speeds. The other system was a less sophisticated system that was commercially available. Surveys conducted with both sets of equipment have produced test results capable of identifying subsurface distress in two of the three sites that exhibited internal cracking due to MRD. Both systems failed to detect distress in a single pavement that exhibited extensive cracking. Both systems correctly indicated that the control pavement exhibited negligible evidence of distress. The initial positive results presented here indicate that a more thorough study (incorporating refinements to the system, data collection, and analysis) is needed. Improvements in the results will be dependent upon defining the optimum number and arrangement of GPR antennas to detect the most common problems in Iowa pavements. In addition, refining high-frequency antenna response characteristics will be a crucial step toward providing an optimum GPR system for detecting materials-related distress.
Abstract:
The spectral efficiency achievable with joint processing of pilot and data symbol observations is compared with that achievable through the conventional (separate) approach of first estimating the channel on the basis of the pilot symbols alone, and subsequently detecting the data symbols. Studied on the basis of a mutual information lower bound, joint processing is found to provide a non-negligible advantage relative to separate processing, particularly for fast fading. It is shown that, regardless of the fading rate, only a very small number of pilot symbols (at most one per transmit antenna and per channel coherence interval) should be transmitted if joint processing is allowed.
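Separate processing is typically analyzed through a lower bound of the following form (a standard construction quoted as context, with n_p pilots per coherence block of T symbols, MMSE estimation error variance sigma_e^2, and the estimation error treated as Gaussian noise; not claimed to be the paper's exact bound):

```latex
R_{\mathrm{sep}} \ge \Bigl(1 - \frac{n_p}{T}\Bigr)\,
\mathbb{E}\!\left[\log_2\!\left(1 + \frac{\rho\,(1 - \sigma_e^2)}{1 + \rho\,\sigma_e^2}\,|v|^2\right)\right],
\qquad \sigma_e^2 = \frac{1}{1 + n_p\,\rho}
```

Here v is the normalized channel estimate. Joint processing additionally extracts channel information from the data symbols themselves, which is why so few dedicated pilot symbols remain worthwhile.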
Abstract:
ADSL is becoming the standard form of residential and small-business broadband Internet access due primarily to its low deployment cost. These ADSL residential lines are often deployed with 802.11 Access Points (AP) that provide wireless connectivity. Given the density of ADSL deployment, it is often possible for a residential wireless client to be in range of several other APs, belonging to neighbors, with ADSL connectivity. While the ADSL technology has shown evident limits in terms of capacity (with speeds ranging from 1 to 10 Mbps), short-range wireless communication can guarantee a much higher capacity (up to 20 Mbps). Furthermore, the ADSL links in the neighborhood are generally under-utilized, since ADSL subscribers do not connect 100% of the time. Therefore, it is possible for a wireless client to simultaneously connect to several APs in range and effectively aggregate their available ADSL bandwidth. In this paper, we introduce ClubADSL, a wireless client that can simultaneously connect to several APs in range on different frequencies and aggregate both their downlink and uplink capacity. ClubADSL is software that runs locally on the client side, and it requires neither modification to the existing Internet infrastructure, nor any hardware/protocol upgrades to the 802.11 local area network. We show the feasibility of ClubADSL in seamlessly transmitting TCP traffic, and validate its implementation both in controlled scenarios and with current applications over real ADSL lines. In particular, we show that a ClubADSL client can greatly benefit from the aggregated download bandwidth in the case of server-client applications such as video streaming, but can also take advantage of the increased upload bandwidth, greatly reducing download times with incentive-based P2P applications such as BitTorrent.
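A sketch of the aggregation idea (a hypothetical helper, not ClubADSL's actual code): dispatch each packet to whichever AP's ADSL link becomes available first, which for equal-size packets splits traffic in proportion to the spare capacity of each line:

```python
import heapq

def schedule(packet_sizes, link_rates_bps):
    """Dispatch packets to the link that becomes free first; return bytes per link."""
    # heap entries: (time at which the link is next free, link index)
    heap = [(0.0, i) for i in range(len(link_rates_bps))]
    heapq.heapify(heap)
    assigned = [0] * len(link_rates_bps)
    for size in packet_sizes:
        free_at, i = heapq.heappop(heap)
        assigned[i] += size
        heapq.heappush(heap, (free_at + 8 * size / link_rates_bps[i], i))
    return assigned

# Three neighboring APs with 1, 2 and 5 Mbps of spare ADSL capacity:
print(schedule([1500] * 1000, [1e6, 2e6, 5e6]))  # bytes split roughly 1:2:5
```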
Abstract:
A contemporary perspective on the tradeoff between transmit antenna diversity and spatial multiplexing is provided. It is argued that, in the context of modern cellular systems and for the operating points of interest, transmission techniques that utilize all available spatial degrees of freedom for multiplexing outperform techniques that explicitly sacrifice spatial multiplexing for diversity. Reaching this conclusion, however, requires that the channel and some key system features be adequately modeled; failure to do so may bring about starkly different conclusions. As a specific example, this contrast is illustrated using the 3GPP Long-Term Evolution system design.