452 results for QoS
Abstract:
We present a mathematically rigorous Quality-of-Service (QoS) metric which relates the achievable QoS of a real-time analytics service to the server energy cost of offering the service. Using a new iso-QoS evaluation methodology, we scale server resources to meet QoS targets and directly rank the servers in terms of their energy-efficiency and, by extension, cost of ownership. Our metric and method are platform-independent and enable fair comparison of datacenter compute servers with significant architectural diversity, including micro-servers. We deploy our metric and methodology to compare three servers running financial option pricing workloads on real-life market data. We find that server ranking is sensitive to data inputs and the desired QoS level, and that although scale-out micro-servers can be up to two times more energy-efficient than conventional heavyweight servers for the same target QoS, they are still six times less energy-efficient than high-performance computational accelerators.
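As a rough illustration of the iso-QoS methodology described above, the sketch below scales each platform out until it meets a throughput-style QoS target and then ranks platforms by the power drawn at that operating point. All platform names and figures are hypothetical placeholders, not the paper's measurements.

```python
import math

# Illustrative iso-QoS ranking: scale each platform out until it meets the
# QoS target, then rank platforms by the power drawn at that operating point.
# All platform figures are hypothetical placeholders, not the paper's data.

def nodes_for_target(per_node_throughput, target):
    """Smallest node count whose aggregate throughput meets the target."""
    return math.ceil(target / per_node_throughput)

def iso_qos_power(p, target):
    return nodes_for_target(p["options_per_s"], target) * p["watts_per_node"]

platforms = [
    {"name": "heavyweight-server", "options_per_s": 4000,  "watts_per_node": 400},
    {"name": "arm-microserver",    "options_per_s": 250,   "watts_per_node": 12},
    {"name": "accelerator",        "options_per_s": 20000, "watts_per_node": 300},
]

target = 20000  # option pricings/s required by the QoS target (illustrative)
for p in sorted(platforms, key=lambda p: iso_qos_power(p, target)):
    print(f"{p['name']}: {iso_qos_power(p, target)} W at iso-QoS")
```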
Abstract:
Uncertainty profiles are used to study the effects of contention within cloud and service-based environments. An uncertainty profile provides a qualitative description of an environment whose quality of service (QoS) may fluctuate unpredictably. Uncertain environments are modelled by strategic games with two agents: a daemon is used to represent overload and high resource contention; an angel is used to represent an idealised resource allocation situation with no underlying contention. Assessments of uncertainty profiles are useful in two ways: firstly, they provide a broad understanding of how environmental stress can affect an application's performance (and reliability); secondly, they allow the effects of introducing redundancy into a computation to be assessed.
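A minimal sketch of the daemon/angel reading of an uncertainty profile, under the simplifying assumption that it can be summarized as a payoff matrix over application strategies and environment states; the matrix values are invented for illustration, not taken from the paper.

```python
# Illustrative daemon/angel assessment over a payoff matrix.
# Rows: application strategies (e.g., redundancy levels); columns: environment
# states. Entries: hypothetical completion quality in [0, 1].
payoff = [
    [0.9, 0.5, 0.2],   # no redundancy
    [0.8, 0.7, 0.6],   # duplicated tasks
    [0.7, 0.7, 0.65],  # triplicated tasks
]

# Daemon: the environment adversarially picks the worst state for each strategy.
daemon_value = max(min(row) for row in payoff)
# Angel: the environment benevolently picks the best state for each strategy.
angel_value = max(max(row) for row in payoff)

print(f"guaranteed under contention (daemon): {daemon_value}")
print(f"achievable without contention (angel): {angel_value}")
```

Comparing the daemon value across rows shows how redundancy improves the worst-case guarantee, which is the second use of uncertainty profiles noted above.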
Abstract:
We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
Abstract:
This paper presents a thorough performance analysis of dual-hop cognitive amplify-and-forward (AF) relaying networks under a spectrum-sharing mechanism over independent non-identically distributed (i.n.i.d.) fading channels. In order to guarantee the quality-of-service (QoS) of primary networks, both the maximum tolerable peak interference power Q at the primary users (PUs) and the maximum allowable transmit power P at the secondary users (SUs) are considered to constrain the transmit power at the cognitive transmitters. For integer-valued fading parameters, a closed-form lower bound for the outage probability (OP) of the considered networks is obtained. Moreover, assuming arbitrary-valued fading parameters, a lower bound in integral form for the OP is derived. In order to obtain further insights into the OP performance, asymptotic expressions for the OP at high SNRs are derived, from which the diversity/coding gains and the diversity-multiplexing gain tradeoff (DMT) of the secondary network can be readily deduced. It is shown that the diversity gain, and likewise the DMT, is solely determined by the fading parameters of the secondary network, whereas the primary network only affects the coding gain. The derived results include several others available in previously published works as special cases, such as those for Nakagami-m fading channels. In addition, performance evaluation results obtained by Monte Carlo simulations verify the accuracy of the theoretical analysis.
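A hedged Monte Carlo sketch of the system model above, simplified to Rayleigh fading (the abstract's generalized fading-model name did not survive extraction): each secondary transmitter's power is capped at min(P, Q/g), where g is its channel gain to the PU, and the end-to-end SNR of the AF link takes the standard form γ1γ2/(γ1+γ2+1). All parameter values are illustrative.

```python
import numpy as np

# Monte Carlo outage probability for dual-hop AF relaying under spectrum
# sharing, simplified to Rayleigh fading. All parameters are illustrative.
rng = np.random.default_rng(0)
N = 1_000_000
P, Q, N0, gamma_th = 1.0, 0.5, 1.0, 1.0  # power limits, noise, SNR threshold

# Exponential channel power gains: S->R, R->D, and interference links to the PU.
g_sr, g_rd = rng.exponential(1.0, N), rng.exponential(1.0, N)
g_sp, g_rp = rng.exponential(1.0, N), rng.exponential(1.0, N)

# Transmit powers constrained by both P (SU limit) and Q (PU interference limit).
Ps = np.minimum(P, Q / g_sp)
Pr = np.minimum(P, Q / g_rp)

g1 = Ps * g_sr / N0              # first-hop SNR
g2 = Pr * g_rd / N0              # second-hop SNR
g_e2e = g1 * g2 / (g1 + g2 + 1)  # end-to-end SNR of the AF link

print("outage probability ~", np.mean(g_e2e < gamma_th))
```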
Abstract:
NanoStreams explores the design, implementation, and system software stack of micro-servers aimed at processing data in-situ and in real time. These micro-servers can serve the emerging Edge computing ecosystem, namely the provisioning of advanced computational, storage, and networking capability near data sources to achieve both low-latency event processing and high-throughput analytical processing, before considering off-loading some of this processing to high-capacity datacentres. NanoStreams explores a scale-out micro-server architecture that can achieve equivalent QoS to that of conventional rack-mounted servers for high-capacity datacentres, but with dramatically reduced form factors and power consumption. To this end, NanoStreams introduces novel solutions in programmable & configurable hardware accelerators, as well as the system software stack used to access, share, and program those accelerators. Our NanoStreams micro-server prototype has demonstrated 5.5× higher energy-efficiency than a standard Xeon server. Simulations of the micro-server's memory system, extended to leverage hybrid DDR/NVM main memory, indicated 5× higher energy efficiency than a conventional DDR-based system.
Abstract:
The last decade has witnessed exponential growth in wireless communication networks, both in the penetration rate of the services provided and in the deployment of new infrastructures across the globe. It is now well established that this trend will not only continue but strengthen, driven by the expected convergence between mobile wireless networks and the provision of broadband services over the fixed Internet, in an evolution towards an integrated architecture based on IP services and applications. For this reason, mobile wireless communications will play a fundamental role in the development of the information society in the medium and long term. The strategy followed in the design and implementation of current-generation (2G and 3G) cellular networks was to stratify the protocol architecture into a modular structure of watertight layers, where each layer of the model is responsible for implementing a set of functions. In this model, communication takes place only between adjacent layers, through pre-established communication primitives. This architectural model makes it easier to implement and introduce new functionality in the network. However, the fact that the lower layers of the protocol stack do not use information made available by the upper layers, and vice versa, degrades system performance. This issue is particularly important when multiple-antenna (MIMO) systems are deployed. Multiple-antenna systems introduce an additional degree of freedom in radio resource allocation: the spatial domain. Unlike resource allocation in the time and frequency domains, radio resources mapped onto the spatial domain cannot be assumed to be fully orthogonal, owing to the interference that results from several terminals transmitting on the same channel and/or time slots but on different spatial beams. Hence, making information on the state of the radio resources available to the upper layers of the protocol stack is of fundamental importance for satisfying the required quality-of-service (QoS) criteria. Efficient radio resource management demands low-complexity packet scheduling algorithms, which set the priority levels with which users access those resources based on information provided by both the lower and the upper layers of the protocol stack. This new communication paradigm, known as cross-layer design, maximizes the data-carrying capacity of the mobile radio channel while satisfying the QoS requirements derived from the application layer of the model. The IEEE 802.16e standard, known as Mobile WiMAX, was drafted to comply with the specifications associated with fourth-generation cellular systems. Its scalable architecture, low deployment cost, and high data rates yield efficient data multiplexing and low packet-transmission delay, which are fundamental attributes for the delivery of broadband services. Likewise, the packet-switched communication inherent in its medium access control layer is fully compatible with the QoS demands of such applications.
Mobile WiMAX therefore appears to satisfy the demanding requirements of fourth-generation mobile networks. This thesis investigates, designs, and implements packet scheduling algorithms for the efficient management of the radio resources of cellular networks in the time, frequency, and spatial domains, taking networks based on the IEEE 802.16e standard as a practical case. The proposed algorithms combine metrics from the physical layer with the QoS requirements of the upper layers, in accordance with the cross-layer paradigm. The performance of these algorithms is analysed through simulations carried out with a system-level simulator, on a platform that implements the physical and medium access control layers of the IEEE 802.16e standard.
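The abstract above does not name a specific scheduling rule; as one concrete example of the cross-layer metrics it describes, the sketch below uses an M-LWDF-style priority that combines the PHY-layer instantaneous rate with upper-layer QoS state (head-of-line delay and a delay budget). All user state is hypothetical.

```python
import math

# Illustrative M-LWDF-style cross-layer priority: weights a PHY-layer metric
# (instantaneous achievable rate, normalized by the average served rate) by
# upper-layer QoS state (head-of-line delay against a delay bound).

def mlwdf_priority(user):
    # a_i = -log(delta_i) / T_i encodes the QoS target: delay bound T_i
    # violated with probability at most delta_i.
    a = -math.log(user["delta"]) / user["delay_bound"]
    return a * user["hol_delay"] * user["inst_rate"] / user["avg_rate"]

users = [
    {"id": 1, "inst_rate": 12e6, "avg_rate": 6e6,  "hol_delay": 0.020,
     "delay_bound": 0.10, "delta": 0.05},   # delay-sensitive video user
    {"id": 2, "inst_rate": 20e6, "avg_rate": 15e6, "hol_delay": 0.005,
     "delay_bound": 0.30, "delta": 0.10},   # best-effort-like user
]

# Each slot (and, in 802.16e, each subchannel/beam) serves the highest priority.
winner = max(users, key=mlwdf_priority)
print("scheduled user:", winner["id"])
```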
Abstract:
In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. In particular, a research topic of particular relevance in telecommunications nowadays is the design and implementation of 4th-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the type of multimedia applications (e.g. YouTube and Skype) to be available in the near future. 4G wireless communication systems will therefore be of paramount importance to the development of the information society. As 4G wireless services continue to increase, they will put more and more pressure on spectrum availability. There is worldwide recognition that current methods of spectrum management have reached their limit and are no longer optimal, so new paradigms must be sought. Studies show that most of the assigned spectrum is under-utilized; in most cases the problem is thus inefficient spectrum management rather than spectrum shortage. There are currently trends towards a more liberalized approach to spectrum management, tightly linked to what is commonly termed Cognitive Radio (CR). Furthermore, conventional deployments of 4G wireless systems (one BS per cell, with mobiles deployed around it) are known to have problems in providing fairness (users closer to the BS benefit more than cell-edge users) and in covering zones affected by shadowing, so the use of relays has been proposed as a solution. Software tools are normally used to evaluate and analyse the performance of 4G wireless systems. Such tools have matured considerably in recent years, and their role in providing a high-level evaluation of proposed algorithms and protocols is now more important than ever. System-level simulation (SLS) tools provide a fundamental and flexible way to test all the envisioned algorithms and protocols under realistic conditions, without the need to deal with the problems of live networks or reduced-scope prototypes. Furthermore, these tools allow network designers to rapidly collect a wide range of performance metrics that are useful for the analysis and optimization of different algorithms. This dissertation proposes the design and implementation of a conventional system-level simulator (SLS), which is afterwards enhanced for 4G wireless technologies, namely cognitive radios (IEEE 802.22) and relays (IEEE 802.16j). The SLS is then used for the analysis of the proposed algorithms and protocols.
Abstract:
Ubiquitous Internet access is one of the main challenges for telecommunications operators in the next decade. The number of Internet users is growing exponentially, and the "always connected, anytime, anywhere" access paradigm is a fundamental requirement for next-generation mobile networks. WiMAX, together with LTE, was recently recognized by the ITU as one of the access technologies compliant with 4G requirements. Even so, this access technology is not fully prepared for next-generation environments, mainly due to the lack of cross-layer mechanisms for QoS and mobility integration. Additionally, beyond WiMAX and LTE, the UMTS/HSPA and Wi-Fi radio access technologies will continue to have a significant impact on mobile communications over the coming years. It is therefore essential to ensure the coexistence of the various radio access technologies in terms of QoS and mobility, thereby enabling the delivery of real-time multimedia services in mobile networks. To guarantee the delivery of multimedia services to WiMAX users, this Thesis proposes a WiMAX cross-layer manager integrated with an end-to-end QoS architecture. The presented architecture enables QoS control and bidirectional communication between the WiMAX system and upper-layer entities. Furthermore, the proposed cross-layer manager is extended with generic, technology-independent events and commands to optimize mobility procedures in WiMAX environments. Tests were performed to evaluate the QoS and mobility procedures of the defined WiMAX architecture, demonstrating that it is perfectly capable of delivering real-time services without introducing excessive costs in the network. Following the QoS and mobility extensions presented for WiMAX, the scope of this Thesis was broadened to heterogeneous wireless access environments. To this end, a seamless mobility architecture with QoS support for multi-technology access networks is proposed. The presented architecture integrates an extended version of IEEE 802.21 with QoS support, as well as an advanced mobility manager integrated with the IP-level mobility management protocols. Finally, to complete the work developed in this Thesis, an extension to the mobility decision procedures in heterogeneous environments is proposed, incorporating network and terminal context information. To validate and evaluate the proposed optimizations, performance tests were carried out on an inter-technology demonstrator comprising WiMAX, Wi-Fi and UMTS/HSPA access networks.
Abstract:
The expectations of citizens from Information Technologies (ITs) are increasing as ITs have become an integral part of our society, serving all kinds of activities, whether professional, leisure, safety-critical applications or business. Hence, the limitations of traditional network designs in providing innovative and enhanced services and applications motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in the Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, while each service has its own requirements (e.g., Quality of Service - QoS). Therefore, the need for more evolved forms of communication has driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This Thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks which classify services into classes. The Thesis is divided into three parts. In the first part, we propose two resource over-reservation algorithms, a Class-based bandwidth Over-Reservation (COR) and an Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) needs, so that the QoS reservation signalling rate is reduced. COR and ECOR allow for dynamically defining over-reservation parameters for CoSs based on the resource conditions of network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR in that it further optimizes the minimization of control overhead. We then propose a centralized control mechanism called Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains a good view of its underlying network topology and the related link resource statistics on a real-time basis to control the overall network. It is important to mention that, in this Thesis, we use multicast trees as the basis for session transport, not only for group communication purposes, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results show a drastic reduction in QoS control signalling and the related overhead, without QoS violations or waste of resources. In addition, we provide a general-purpose analytical model to assess the impact of various parameters (e.g., link capacity, session dynamics, etc.) that generally challenge resource over-provisioning control. In the second part of this Thesis, we propose a decentralized control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., tree and bandwidth usage information) such that each CDP is able to maintain a good knowledge of the network topology and the related link resource statistics on a real-time basis. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs which are concerned (correlated). Moreover, synchronization is carried out through our proposed concept of Virtual Over-Provisioned Resource (VOPR), which is a share of each interface's over-reservation allotted to each tree that uses the interface.
Thus, each CDP can process several session requests over a tree without requiring synchronization between the correlated CDPs, as long as the VOPR of the tree is not exhausted. Analytical and simulation results demonstrate that aggregate over-reservation control in decentralized scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol called the ACOR Protocol (ACOR-P) to support the centralized and decentralized designs in this Thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPRs of all trees that originate at the same CDP, allowing more session requests to be processed without synchronization than in ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information, preventing unnecessary synchronization during congestion periods, when VOPRs would be exhausted upon every session request. The performance evaluation, through analytical and simulation results, proves the superiority of E-ACOR in minimizing overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources. The last part of this Thesis presents the Survivable ACOR (SACOR) proposal, which supports stable operation of the QoS and network control mechanisms in case of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability, characterized by fast convergence times and differentiated traffic re-routing under efficient resource utilization, i.e., without wasting bandwidth. In summary, the QoS and architectural control mechanisms proposed in this Thesis provide efficient and scalable support for key network control sub-systems (e.g., QoS and resource control, traffic engineering, multicasting, etc.), and thus allow overall network control performance to be optimized.
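A minimal sketch of the over-reservation idea behind COR/ECOR, under invented parameters and without the thesis's dynamic parameter tuning: each class reserves a cushion above its current demand, so admitting a session triggers QoS signalling only when the cushion is exhausted.

```python
# Illustrative class-based over-reservation: signalling happens only when a
# class's cushion is exhausted, not on every session. Parameters are hypothetical.

class CoS:
    def __init__(self, name, over_factor=1.5):
        self.name, self.over_factor = name, over_factor
        self.used = 0.0      # bandwidth committed to admitted sessions
        self.reserved = 0.0  # bandwidth reserved along the path for this class
        self.signals = 0     # QoS signalling events triggered

    def admit(self, bw):
        if self.used + bw > self.reserved:
            # Cushion exhausted: re-signal and over-reserve above current need.
            self.reserved = (self.used + bw) * self.over_factor
            self.signals += 1
        self.used += bw

video = CoS("video")
for _ in range(100):
    video.admit(1.0)  # 100 identical session requests
print(f"sessions: 100, signalling events: {video.signals}")  # far fewer than 100
```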
Abstract:
The Internet as a video distribution medium has seen tremendous growth in recent years. Currently, the transmission of major live events and TV channels over the Internet can easily reach hundreds or millions of users trying to receive the same content on very distinct receiver terminals, posing both scalability and heterogeneity challenges to content and network providers. In private and well-managed Internet Protocol (IP) networks, these types of distribution are supported by specially designed architectures, complemented with IP Multicast protocols and Quality of Service (QoS) solutions. However, the Best-Effort and Unicast nature of the Internet requires the introduction of a new set of protocols and related architectures to support the distribution of such content. In the field of file and non-real-time content distribution, this has led to the creation and development of several Peer-to-Peer protocols that have experienced great success in recent years. This chapter presents the current research and developments in Peer-to-Peer video streaming over the Internet. Special focus is placed on peer protocols, associated architectures, and video coding techniques. The authors also review and describe current Peer-to-Peer streaming solutions. © 2013, IGI Global.
Abstract:
The Joint Video Team, composed of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), has standardized a scalable extension of the H.264/AVC video coding standard called Scalable Video Coding (SVC). H.264/SVC provides scalable video streams composed of a base layer and one or more enhancement layers. Enhancement layers may improve the temporal, spatial, or signal-to-noise ratio resolution of the content represented by the lower layers. One of the applications of this standard is video transmission in both wired and wireless communication systems; it is therefore important to analyze how packet losses contribute to the degradation of quality, and which mechanisms could be used to improve that quality. This paper provides an analysis and evaluation of H.264/SVC in error-prone environments, quantifying the degradation caused by packet losses in the decoded video. It also proposes and analyzes the consequences of QoS-based discarding of packets through different marking solutions.
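As a rough illustration of the QoS-based discarding discussed above (the marking scheme and values are hypothetical, not the ones evaluated in the paper): enhancement-layer packets are marked with a higher drop precedence than base-layer packets, so congestion degrades quality gracefully rather than breaking the decodable base stream.

```python
from collections import Counter

# Illustrative QoS-based discarding for H.264/SVC: under congestion, evict
# packets of the highest enhancement layer first and protect the base layer.
# Layer ids: 0 = base, 1..n = enhancement. All values are hypothetical.

def drop_precedence(packet):
    return packet["layer"]  # higher layer = dropped first

def enqueue(queue, packet, capacity):
    if len(queue) < capacity:
        queue.append(packet)
    else:
        victim = max(queue, key=drop_precedence)
        if drop_precedence(victim) > drop_precedence(packet):
            queue.remove(victim)  # congestion: sacrifice an enhancement packet
            queue.append(packet)
        # else: the incoming packet itself is the most expendable; drop it

queue, capacity = [], 8
stream = [{"seq": i, "layer": i % 3} for i in range(24)]  # interleaved layers
for packet in stream:
    enqueue(queue, packet, capacity)
print(Counter(p["layer"] for p in queue))  # base-layer packets all survive
```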
Abstract:
The offer of services based on wireless communications has grown exponentially over the last decade. Ever higher transmission rates and better QoS are demanded, without compromising transmit power or available bandwidth. MIMO technology can increase the capacity of these systems without requiring additional bandwidth or transmit power. The work developed in this dissertation consisted of the study of MIMO systems, characterized by the use of multiple antennas to transmit and receive information. With such a system, a spatial diversity gain can be obtained by using space-time codes, which exploit the spatial and time domains simultaneously. This dissertation places special emphasis on the Alamouti space-time block code, whose receiver side is implemented on an FPGA. The implementation targets a 2x1 antenna configuration, uses floating-point arithmetic, and supports three modulation schemes: BPSK, QPSK and 16-QAM. Finally, the trade-off between the precision achieved in the numerical representation of the results and the FPGA resources consumed is analysed. With the adopted architecture, throughputs from 29.141 Msymbols/s (without pipelining) to 262.674 Msymbols/s (with pipelining) are achieved for BPSK modulation.
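A compact floating-point model of the 2x1 Alamouti receive combining that the dissertation maps to an FPGA; the constellation, channel, and noise values below are illustrative. Two symbols s0 and s1 are transmitted over two slots as (s0, s1) and (-s1*, s0*), and the combiner recovers them using conjugate channel estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK constellation (the dissertation also covers BPSK and 16-QAM).
const = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
s0, s1 = rng.choice(const, 2)

# Quasi-static flat channel from the two transmit antennas to the one receiver.
h0, h1 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
n0, n1 = (rng.normal(size=2) + 1j * rng.normal(size=2)) * 0.05

# Alamouti transmission over two symbol periods:
#   slot 0: antenna 0 sends s0,         antenna 1 sends s1
#   slot 1: antenna 0 sends -conj(s1),  antenna 1 sends conj(s0)
r0 = h0 * s0 + h1 * s1 + n0
r1 = -h0 * np.conj(s1) + h1 * np.conj(s0) + n1

# Receiver combining (the part implemented on the FPGA), then ML slicing.
y0 = np.conj(h0) * r0 + h1 * np.conj(r1)   # ~ (|h0|^2 + |h1|^2) * s0
y1 = np.conj(h1) * r0 - h0 * np.conj(r1)   # ~ (|h0|^2 + |h1|^2) * s1

gain = abs(h0) ** 2 + abs(h1) ** 2
det = lambda y: const[np.argmin(np.abs(const - y / gain))]
print("sent:", s0, s1, "detected:", det(y0), det(y1))
```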
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering