391 results for QoS WAP Palvelunlaatu


Relevance: 10.00%

Publisher:

Abstract:

The Internet is responsible for the emergence of a new television paradigm: IPTV (Television over IP). This service differs from other television models because it gives users a high degree of interactivity, with personalized control over the content they want to watch. It also makes it possible to offer an unlimited number of channels, as well as access to Video on Demand (VoD) content. IPTV offers several features supported by a complex architecture and a converged network that integrates voice, data and video services. IPTV technology exploits the characteristics of the Internet to the fullest, using Quality of Service mechanisms. It also emerges as a revolution in the television landscape, opening the door to new investments by telecommunications companies. The Internet also makes it possible to place telephone calls over the IP network. This service is called VoIP (Voice over IP) and has been in operation for some time. This creates the opportunity to offer the end consumer a service that bundles Internet, VoIP and IPTV, known as the Triple Play service. The Triple Play service forced a revision of the entire transport network in order to prepare it to support this service in an efficient (QoS), resilient (failure recovery) and optimized (traffic engineering) way. In telecommunications networks, both link failures and congestion can affect the services offered to end consumers. Survivability mechanisms are applied in order to guarantee service continuity even when a failure occurs. The goal of this dissertation is to propose a network architecture capable of supporting the Triple Play service in an efficient, resilient and optimized way, through optimal or near-optimal routing. Within the scope of this work, the impact of routing strategies that ensure the efficiency, survivability and optimization of existing IP networks is analysed, and the maximum number of clients allowed in a peak situation for a given network is determined. The work covers the concepts of Triple Play services, access networks, core networks, Quality of Service, MPLS (Multiprotocol Label Switching), traffic engineering and failure recovery. The conclusions drawn from the simulations carried out with the NS-2.33 network simulator (Network Simulator version 2.33) were used to propose the architecture of a network capable of supporting the Triple Play service in an efficient, resilient and optimized way.
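
The dissertation determines the maximum number of clients a given network can support at peak load; the sketch below is a back-of-the-envelope version of that kind of capacity estimate, assuming illustrative per-client bandwidths for each Triple Play component. None of these figures come from the thesis or from the NS-2.33 simulations.

# Back-of-the-envelope sketch of the "maximum clients at peak" question.
# All figures below are illustrative assumptions, not values from the thesis.

IPTV_SD_MBPS = 4.0     # assumed MPEG-4 SD channel rate per set-top box
VOIP_MBPS = 0.1        # assumed G.711 call including IP/RTP overhead
DATA_MBPS = 2.0        # assumed per-client best-effort allowance at peak

def max_triple_play_clients(link_mbps: float, reserve: float = 0.2) -> int:
    """Clients a link can carry at peak, keeping a share of capacity in reserve
    for signalling, failure re-routing and traffic-engineering headroom."""
    usable = link_mbps * (1.0 - reserve)
    per_client = IPTV_SD_MBPS + VOIP_MBPS + DATA_MBPS
    return int(usable // per_client)

print(max_triple_play_clients(1000.0))  # e.g. a 1 Gbps aggregation link -> 131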

Relevance: 10.00%

Publisher:

Abstract:

In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to intensify with the convergence of fixed wired Internet networks and mobile networks, and with the evolution towards an all-IP architecture. Mobile wireless communications will therefore be of paramount importance to the development of the information society in the near future. A research topic of particular relevance in telecommunications today is the design and implementation of fourth-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies over a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will have to sustain the stringent quality of service (QoS) requirements and the high data rates expected from the multimedia applications that will become available in the near future. The approach followed in the design and implementation of current-generation mobile wireless networks (2G and 3G) has been to stratify the architecture into a communication protocol model composed of a set of layers, each encompassing a set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not use information available at upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Unlike the time and frequency domains, radio resources mapped onto the spatial domain cannot be assumed to be completely orthogonal, due to the interference caused by users transmitting in the same frequency sub-channel and/or time slot but in different spatial beams. Therefore, making information about the state of the radio resources available from lower to upper layers is of fundamental importance to achieving the QoS levels expected by those multimedia applications. In order to match application requirements with the constraints of the mobile radio channel, researchers have in recent years proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules of the original reference model that restrict communication to adjacent layers and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully in line with the cross-layer design paradigm. Specifically, well-designed packet schedulers for 4G networks should maximize the available capacity, taking into account the limitations imposed by the mobile radio channel while complying with the QoS requirements of the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, seems to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes that are essential for broadband data services. Moreover, the connection-oriented approach of its medium access control layer is fully compliant with the quality of service demands of such applications. Therefore, Mobile WiMAX appears to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, according to the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted in a simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
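
As a purely illustrative companion to the cross-layer idea above, the sketch below computes an M-LWDF-style priority that mixes a physical-layer input (achievable rate on the current resource unit) with an upper-layer QoS input (head-of-line delay against its budget). It is not the scheduler proposed in the thesis, and all parameter values are assumptions.

# Minimal sketch of a cross-layer scheduling priority: channel state from the
# PHY combined with QoS state from upper layers (M-LWDF-style, illustrative).
from dataclasses import dataclass

@dataclass
class Flow:
    user_id: int
    spectral_eff: float     # achievable rate on this sub-channel/beam (PHY input)
    avg_rate: float         # long-term served rate (fairness term)
    hol_delay_ms: float     # head-of-line packet delay (QoS input)
    delay_budget_ms: float  # QoS requirement from the application layer

def priority(f: Flow, delay_weight: float = 1.0) -> float:
    """Higher value -> schedule first on the current resource unit."""
    channel_gain = f.spectral_eff / max(f.avg_rate, 1e-6)      # proportional fair
    urgency = 1.0 + delay_weight * (f.hol_delay_ms / f.delay_budget_ms)
    return channel_gain * urgency

flows = [Flow(1, 3.0, 1.2, 15.0, 100.0), Flow(2, 1.5, 0.4, 80.0, 100.0)]
chosen = max(flows, key=priority)
print(f"resource unit assigned to user {chosen.user_id}")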

Relevance: 10.00%

Publisher:

Abstract:

Bandwidth control is an important concept when dealing with large-scale networks. ISPs need to guarantee availability and quality of service to every customer, while ensuring that the network as a whole does not slow down. To achieve this, ISPs must collect traffic data, analyse it and use it to set each customer's broadband speed. NOS Madeira operated a similar system for several years. However, that system had become obsolete and a new one had to be built from scratch. Its limitations included the impossibility of changing the traffic analysis algorithms, poor integration with NOS Madeira's network management services, and reduced scalability and modularity. The IP Network Usage Accounting system is the answer to these problems. This project focuses on the development of the Accounting System subsystem, the second of the three subsystems that make up the IP Network Usage Accounting system. This subsystem, successfully implemented and currently in production at NOS Madeira, is responsible for analysing the data mentioned above and for using the results of that analysis to steer broadband availability according to each customer's network usage.
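
As an illustration of the kind of usage-driven policy such an accounting subsystem could feed, the sketch below lowers a customer's provisioned rate once a hypothetical fair-use volume is exceeded. The thresholds and the adjustment rule are assumptions and are not NOS Madeira's production algorithm.

# Minimal sketch of a usage-driven bandwidth policy. Thresholds, tiers and the
# adjustment rule are purely hypothetical.

def adjusted_rate_mbps(contracted_mbps: float, monthly_gb: float,
                       fair_use_gb: float = 500.0) -> float:
    """Reduce the provisioned rate gradually once a fair-use volume is passed,
    never going below 10% of the contracted speed."""
    if monthly_gb <= fair_use_gb:
        return contracted_mbps
    overshoot = monthly_gb / fair_use_gb          # e.g. 2.0 = twice the quota
    return max(contracted_mbps / overshoot, 0.1 * contracted_mbps)

print(adjusted_rate_mbps(100.0, 300.0))   # below quota -> 100.0
print(adjusted_rate_mbps(100.0, 1000.0))  # twice the quota -> 50.0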

Relevance: 10.00%

Publisher:

Abstract:

The production of oligosaccharides from chitosan has been the subject of several studies in the pharmaceutical, biochemical, food and medical fields, due to the functional properties of these compounds. This study aimed to increase the production of chitooligosaccharides (COS) through the optimization of the production and the characterization of chitosanolytic enzymes secreted by the microorganisms Paenibacillus chitinolyticus and Paenibacillus ehimensis, and to evaluate the antioxidant potential of the products obtained. In the optimization of chitosanase production, Fractional Factorial Experimental Design and Central Composite Rotatable Design strategies were employed. The results identified chitosan, peptone and yeast extract as the components that most influenced chitosanase production by these microorganisms. With the optimization of the culture media it was possible to obtain increases of approximately 8.1-fold (from 0.043 U.mL-1 to 0.35 U.mL-1) and 7.6-fold (from 0.08 U.mL-1 to 0.61 U.mL-1) in the chitosanase activity produced by P. chitinolyticus and P. ehimensis, respectively. The enzyme complexes showed high stability over the temperature range from 30 °C to 55 °C and at pH between 5.0 and 9.0. The action of organic solvents, divalent ions and other chemical agents on the activity of these enzymes was also assessed, demonstrating the high stability of these crude complexes and their dependence on Mn2+. The COS generated showed DPPH radical scavenging activity, reaching maximum scavenging rates of 61% and 39% when produced with the enzymes of P. ehimensis and P. chitinolyticus, respectively. The use of these enzymes in crude form may facilitate their industrial application.
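
A quick arithmetic check of the fold increases reported above, using only the values given in the abstract (U.mL-1 = enzyme units per millilitre):

# Verify the reported ~8.1-fold and ~7.6-fold activity increases.
before = {"P. chitinolyticus": 0.043, "P. ehimensis": 0.08}   # U.mL-1, initial
after = {"P. chitinolyticus": 0.35, "P. ehimensis": 0.61}     # U.mL-1, optimized
for strain in before:
    fold = after[strain] / before[strain]
    print(f"{strain}: {fold:.1f}-fold increase")  # ~8.1 and ~7.6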

Relevance: 10.00%

Publisher:

Abstract:

This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standard. The approach is called Group Sequential Communication (GSC). The GSC achieves better performance than the HCCA mechanism when dealing with small data packets, by adopting decentralized medium access control with a publish/subscribe communication scheme. The main objective of the thesis is to reduce the HCCA overhead caused by the Polling, ACK and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme used by the HCCA scheduling algorithm through a Virtual Token Passing procedure among the members of the real-time group, to whom high-priority, sequential access to the communication medium is granted. In order to improve the reliability of the proposed mechanism over a noisy channel, an error recovery scheme called the second chance algorithm is presented. This scheme is based on a block acknowledgment strategy that allows missing real-time messages to be retransmitted. Thus, the GSC mechanism sustains real-time traffic across many IEEE 802.11/11e devices, with optimized bandwidth usage and minimal delay variation for data packets in the wireless network. To validate the communication scheme, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++ and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
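
The sketch below illustrates, under simplified assumptions, the Virtual Token Passing rotation and a single-retry "second chance" recovery described above. It is not the thesis implementation; the member ordering, slot handling and retry policy are assumptions, and the transmit callback stands in for a real or simulated channel.

# Minimal sketch of Virtual Token Passing with a "second chance" retry.

class VirtualTokenGroup:
    def __init__(self, members):
        self.members = list(members)   # fixed sequential order of RT stations
        self.position = 0

    def next_holder(self):
        """Rotate the virtual token to the next member of the real-time group."""
        holder = self.members[self.position]
        self.position = (self.position + 1) % len(self.members)
        return holder

def cycle(group, transmit, max_retries=1):
    """One access cycle: each member transmits in turn; a failed transmission
    gets one extra attempt (the 'second chance') before the token moves on."""
    for _ in range(len(group.members)):
        station = group.next_holder()
        attempts = 0
        while not transmit(station) and attempts < max_retries:
            attempts += 1   # retransmit the missing real-time message once

group = VirtualTokenGroup(["sta1", "sta2", "sta3"])
cycle(group, transmit=lambda sta: True)  # replace with a real/simulated channel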

Relevance: 10.00%

Publisher:

Abstract:

New multimedia applications that use the Internet as a communication medium are pressing for the development of new technologies, such as MPLS (Multiprotocol Label Switching) and DiffServ. These technologies add new and powerful features to the Internet backbone, such as the provision of QoS (Quality of Service) capabilities. However, to obtain true end-to-end QoS it is not enough to implement such technologies in the network core; it becomes indispensable to extend these improvements to the access networks, which is the aim of several works currently under development. To contribute to this process, this thesis presents RSVP-SVC (Resource Reservation Protocol Switched Virtual Connection), an extension of RSVP-TE. RSVP-SVC is presented here as a means to support true end-to-end QoS by extending the scope of MPLS. A Switched Virtual Connection (SVC) service is specified for use in the context of an MPLS User-to-Network Interface (MPLS UNI), able to efficiently establish and activate Label Switched Paths (LSPs) starting from the access routers, satisfying the QoS requirements demanded by the applications. RSVP-SVC was specified in Estelle, a Formal Description Technique (FDT) standardized by ISO. The editing, compilation, verification and simulation of RSVP-SVC were carried out with the EDT (Estelle Development Toolset) software. The benefits of the proposed protocol and the most important issues to consider when using it are also discussed.
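
For illustration only, the toy code below mimics the RSVP-TE-style two-pass signalling (admission control on the downstream pass, reservation and per-hop label assignment on the upstream pass) on which an SVC established from an access router could build. It is not the RSVP-SVC protocol specified in the thesis; the message fields and the label allocator are simplified assumptions.

# Toy illustration of Path/Resv-style LSP establishment from an access router.
from itertools import count

label_pool = count(start=16)  # labels 0-15 are reserved in MPLS

def setup_lsp(path_of_routers, bandwidth_mbps):
    """Admission-check every hop downstream, then reserve and label upstream."""
    # downstream pass: admission control at every hop
    for router in path_of_routers:
        if router["free_mbps"] < bandwidth_mbps:
            return None                      # reservation rejected, no LSP
    # upstream pass: reserve resources and assign a label per hop
    labels = []
    for router in reversed(path_of_routers):
        router["free_mbps"] -= bandwidth_mbps
        labels.append(next(label_pool))
    return list(reversed(labels))            # per-hop labels, ingress to egress

path = [{"name": "access", "free_mbps": 100}, {"name": "core", "free_mbps": 1000}]
print(setup_lsp(path, bandwidth_mbps=10))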

Relevance: 10.00%

Publisher:

Abstract:

The development of wireless sensor networks for control and monitoring functions has created a vibrant research scenario, covering topics that range from communication aspects to energy efficiency. When source sensors are endowed with cameras for visual monitoring, a new set of challenges arises, as transmission and monitoring requirements change considerably. In particular, visual sensors collect data following a directional sensing model, which alters the meaning of concepts such as vicinity and redundancy but allows source nodes to be differentiated by their sensing relevance to the application. In this context, we propose the combined use of two differentiation strategies as a novel QoS parameter, exploring the sensing relevance of source nodes and DWT image coding. This innovative approach enables a new range of optimizations that improve the performance of visual sensor networks at the cost of a small reduction in the overall monitoring quality of the application. Besides defining a new concept of relevance and proposing mechanisms to support its practical exploitation, we propose five different optimizations in the way images are transmitted in wireless visual sensor networks, aiming at energy saving, low-delay transmission and error recovery. Taken together, the proposed differentiation strategies and the related optimizations open a relevant research trend, in which the application's monitoring requirements are used to guide a more efficient operation of sensor networks.
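
The sketch below illustrates one way sensing relevance could steer DWT-based transmission, with less relevant sources dropping detail subbands and thereby saving energy. The relevance scale and the subband mapping are assumptions and do not reproduce the five optimizations actually proposed in the work.

# Map a node's sensing relevance to the DWT subbands it transmits.
# Subbands of a 2-level DWT decomposition, from most to least essential.
SUBBANDS = ["LL2", "LH2/HL2/HH2", "LH1/HL1/HH1"]

def subbands_to_send(relevance: float):
    """relevance in [0, 1]: 1.0 = fully relevant source node."""
    if relevance >= 0.8:
        return SUBBANDS            # full-quality image
    if relevance >= 0.4:
        return SUBBANDS[:2]        # drop the finest detail subbands
    return SUBBANDS[:1]            # only the low-resolution approximation

for r in (1.0, 0.5, 0.2):
    print(r, subbands_to_send(r))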

Relevance: 10.00%

Publisher:

Abstract:

This paper presents the performance analysis of traffic retransmission algorithms proposed for the HCCA medium access mechanism of the IEEE 802.11e standard applied to industrial environments. The nature of this kind of environment, which suffers from electromagnetic interference, combined with the wireless medium of the IEEE 802.11 standard, which is susceptible to such interference, and the lack of retransmission mechanisms, makes it impracticable to ensure quality of service for the real-time traffic that the IEEE 802.11e standard targets and that this environment requires. To solve this problem, this paper proposes a new approach involving the creation and evaluation of retransmission algorithms in order to ensure a level of robustness, reliability and quality of service for wireless communication in such environments. According to this approach, if a transmission error occurs, the traffic scheduler is able to manage retransmissions so as to recover the lost data. The proposed approach is evaluated through simulations in which the retransmission algorithms are applied to different scenarios, which are abstractions of an industrial environment; the results are obtained using a network simulator developed by the authors and compared with each other to assess which algorithm performs better in a pre-defined application.
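
As a hedged illustration of the retransmission idea, the sketch below re-queues failed real-time transmissions and retries them while a remaining time budget allows it. The budget accounting, the fixed per-frame time and the retry limit are assumptions and are not the algorithms evaluated in the paper.

# Minimal sketch of a retransmission-aware polling schedule: lost real-time
# frames get another chance while the service-interval budget lasts.

def serve_interval(stations, send, budget_us, tx_time_us=300, max_retries=2):
    """stations: iterable of ids; send(sta) -> True on success (ACK received)."""
    pending = [(sta, 0) for sta in stations]      # (station, retries so far)
    while pending and budget_us >= tx_time_us:
        sta, retries = pending.pop(0)
        budget_us -= tx_time_us
        if not send(sta) and retries < max_retries:
            pending.append((sta, retries + 1))    # give the frame another chance
    return budget_us                              # leftover time in this interval

left = serve_interval(["plc1", "sensor2"], send=lambda s: True, budget_us=2000)
print(left)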

Relevance: 10.00%

Publisher:

Abstract:

This work presents experimental studies of VoIP connections over IEEE 802.11b WiFi networks with handoff. Indoor and outdoor network experiments were carried out to measure the QoS parameters delay, throughput, jitter and packet loss. The performance parameters were obtained using the software tools Ekiga, Iperf and Wimanager, which provide, respectively, VoIP connection emulation, network traffic generation, and acquisition of the throughput, jitter and packet loss metrics. The average delay is derived from the measured throughput and the concept of packet virtual transmission time. The experimental data are validated against the QoS levels accepted as adequate for each metric in the specialized literature.
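
The sketch below illustrates the packet virtual transmission time concept used above to derive the average delay: given a measured throughput and a known packet size, the per-packet transmission time follows directly. The packet size and throughput values are illustrative, not measurements from the experiments.

# Packet virtual transmission time from a measured throughput.

def virtual_tx_time_ms(packet_bytes: int, throughput_mbps: float) -> float:
    """Time to push one packet at the measured throughput, in milliseconds."""
    return (packet_bytes * 8) / (throughput_mbps * 1e6) * 1e3

# e.g. ~200-byte VoIP packet (G.711 frame plus RTP/UDP/IP headers) on a flow
# measured at about 85 kbps -> roughly 19 ms per packet.
print(virtual_tx_time_ms(packet_bytes=200, throughput_mbps=0.085))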

Relevance: 10.00%

Publisher:

Abstract:

Modern wireless systems employ adaptive techniques to provide high throughput while observing the desired coverage, Quality of Service (QoS) and capacity. An alternative way to further enhance the data rate is to apply cognitive radio concepts, in which a system is able to exploit unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques such as Automatic Modulation Classification (AMC) can help, or even be vital, in such scenarios. Usually, AMC implementations rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g. Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework, known as the correntropy coefficient. It can extract similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
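
For illustration, the sketch below computes a correntropy coefficient between two signals using one common definition from the ITL literature: centered correntropy with a Gaussian kernel, normalized by the autocorrentropies. The kernel width and the test signals are arbitrary choices, and this is not the classifier built in the work.

# Correntropy coefficient sketch (ITL similarity measure with Gaussian kernel).
import numpy as np

def gaussian_kernel(u, sigma):
    return np.exp(-(u ** 2) / (2 * sigma ** 2))

def centered_correntropy(x, y, sigma):
    v = np.mean(gaussian_kernel(x - y, sigma))                      # correntropy
    cross = np.mean(gaussian_kernel(x[:, None] - y[None, :], sigma))
    return v - cross                                                # centering term

def correntropy_coefficient(x, y, sigma=1.0):
    uxy = centered_correntropy(x, y, sigma)
    uxx = centered_correntropy(x, x, sigma)
    uyy = centered_correntropy(y, y, sigma)
    return uxy / np.sqrt(uxx * uyy)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.cos(2 * np.pi * 10 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)                   # AWGN
print(correntropy_coefficient(clean, noisy))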

Relevance: 10.00%

Publisher:

Abstract:

Service provisioning is a challenging research area for the design and implementation of autonomic service-oriented software systems. It includes automated QoS management for such systems and their applications. Monitoring, diagnosis and repair are three key features of QoS management. This work presents a self-healing Web-service-based framework that manages QoS degradation at runtime. Our approach is based on proxies, which act at the meta-level of the communication and extend the HTTP envelope of the exchanged messages with QoS-related parameter values. QoS data are filtered over time and analysed using statistical functions and a Hidden Markov Model. Detected QoS degradations are then handled by the proxies. We evaluated our framework using an orchestrated electronic shop application (FoodShop).
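
As a hedged illustration of the proxy idea, the sketch below piggybacks a QoS value on an HTTP header and flags degradation with a simple smoothed threshold. The header name, smoothing factor and threshold are hypothetical, and the HMM-based analysis used in the paper is not reproduced here.

# Proxy-style QoS annotation plus a naive degradation filter (illustrative).

QOS_HEADER = "X-QoS-Response-Time-Ms"   # assumed custom header name

def annotate(headers: dict, response_time_ms: float) -> dict:
    """Extend the HTTP envelope with a QoS-related parameter value."""
    headers = dict(headers)
    headers[QOS_HEADER] = f"{response_time_ms:.1f}"
    return headers

class DegradationFilter:
    def __init__(self, threshold_ms=800.0, alpha=0.2):
        self.threshold_ms, self.alpha, self.ewma = threshold_ms, alpha, None

    def observe(self, headers: dict) -> bool:
        """Smooth the observed response times; True means 'trigger repair'."""
        rt = float(headers[QOS_HEADER])
        if self.ewma is None:
            self.ewma = rt
        else:
            self.ewma = (1 - self.alpha) * self.ewma + self.alpha * rt
        return self.ewma > self.threshold_ms

f = DegradationFilter()
print(f.observe(annotate({}, 1200.0)))  # a very slow call -> True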

Relevance: 10.00%

Publisher:

Abstract:

To manage the complexity associated with distributed multimedia systems, a solution must incorporate middleware concepts in order to hide specific hardware and operating system aspects. Applications in these systems can be deployed on different types of platforms, and the components of these systems must interact with each other. Because the state of the underlying platforms varies, a flexible approach should allow the dynamic substitution of components in order to maintain the QoS level of the running application. In this context, this work presents a middleware-layer approach to support the dynamic substitution of components in the context of the Cosmos framework, starting with the choice of the target component, proceeding to the decision of which of the candidate components will be chosen, and concluding with the process defined for the exchange itself. The approach was defined taking into account the Cosmos QoS model and the way it deals with dynamic reconfiguration.
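
The sketch below illustrates that substitution pipeline under simplified assumptions: pick the target component, choose a candidate that satisfies a QoS requirement, and rebind its clients. The scoring rule and the component fields are illustrative and are not the Cosmos QoS model.

# Candidate selection and component exchange (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    interface: str
    expected_frame_rate: float   # QoS property advertised by the component

def choose_candidate(target: Component, candidates, required_rate: float):
    """Among candidates offering the same interface, keep those meeting the
    QoS requirement and prefer the least over-provisioned one."""
    viable = [c for c in candidates
              if c.interface == target.interface
              and c.expected_frame_rate >= required_rate]
    return min(viable, key=lambda c: c.expected_frame_rate, default=None)

def substitute(bindings: dict, target: Component, replacement: Component):
    """Rebind clients of the target to the replacement (the 'exchange' step)."""
    for client, provider in bindings.items():
        if provider is target:
            bindings[client] = replacement

decoder = Component("sw_decoder", "IVideoDecoder", 18.0)
candidates = [Component("hw_decoder", "IVideoDecoder", 30.0),
              Component("gpu_decoder", "IVideoDecoder", 60.0)]
best = choose_candidate(decoder, candidates, required_rate=25.0)
bindings = {"player": decoder}
substitute(bindings, decoder, best)
print(bindings["player"].name)   # hw_decoder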

Relevance: 10.00%

Publisher:

Abstract:

Distributed multimedia systems have highly variable characteristics, resulting in new requirements as new technologies become available or in the need to adapt to the amount of available resources. Such systems should therefore provide support for dynamic adaptation, adjusting their structure and behaviour at runtime. This paper presents a model-based adaptation approach and proposes a reflective, component-based framework for the construction and support of self-adaptive distributed multimedia systems, providing facilities for the development and evolution of such systems, such as dynamic adaptation. The proposal is to keep one or more models that represent the system at runtime, so that an external entity can analyse these models, identifying problems and trying to solve them. These models integrate the reflective meta-level, acting as a self-representation of the system. The framework defines a meta-model for the description of self-adaptive distributed multimedia applications, which can represent components and their relationships, policies for QoS specification, and adaptation actions. Additionally, the paper proposes an ADL and an architecture for model-based adaptation. As a case study, the paper presents scenarios that demonstrate the application of the framework in practice, with and without the use of the ADL, and examines characteristics related to dynamic adaptation.
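
A minimal, assumption-laden sketch of that models-at-runtime loop is given below: a self-representation of the system is inspected by an external analyser, which returns adaptation actions when a QoS policy is violated. The component, policy and action names are illustrative and do not come from the framework's meta-model or ADL.

# Runtime model + external analysis returning adaptation actions (sketch).

runtime_model = {
    "components": {"videoSource": {"frame_rate": 12.0}},
    "policies":   [{"component": "videoSource", "property": "frame_rate",
                    "min": 24.0, "action": "switch_to_lower_resolution"}],
}

def analyse(model):
    """Return the adaptation actions required by the current model state."""
    actions = []
    for policy in model["policies"]:
        value = model["components"][policy["component"]][policy["property"]]
        if value < policy["min"]:
            actions.append(policy["action"])
    return actions

print(analyse(runtime_model))   # ['switch_to_lower_resolution']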

Relevance: 10.00%

Publisher:

Abstract:

With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all of an application's requirements. To fulfil such requirements it may be necessary to use, instead of a single service, a composition of services that aggregates services provided by different cloud platforms. In order to generate added value for the user, this composition of services provided by several Cloud Computing platforms requires a solution in terms of platform integration, which involves handling a large number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator acts as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services that take into account metadata about the services, such as QoS (Quality of Service), prices, etc. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. Through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of the service composition, selection and adaptation processes it performs, as well as the potential of using this middleware in heterogeneous cloud computing scenarios.
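
The sketch below illustrates QoS- and price-aware service selection with a failover re-selection, in the spirit of Cloud Integrator's selection and adaptation mechanisms. The metadata fields, weights and catalogue are assumptions, not the middleware's actual model.

# Weighted QoS/price selection with failover (illustrative sketch).

catalogue = [
    {"name": "storageA", "availability": 0.999, "latency_ms": 40, "price": 0.10},
    {"name": "storageB", "availability": 0.995, "latency_ms": 25, "price": 0.07},
]

def score(svc, w_avail=0.5, w_latency=0.3, w_price=0.2):
    """Higher is better: reward availability, penalise latency and price."""
    return (w_avail * svc["availability"]
            - w_latency * svc["latency_ms"] / 100.0
            - w_price * svc["price"])

def select(services, excluded=()):
    ranked = sorted((s for s in services if s["name"] not in excluded),
                    key=score, reverse=True)
    return ranked[0] if ranked else None

primary = select(catalogue)
backup = select(catalogue, excluded={primary["name"]})  # adaptation on failure
print(primary["name"], "->", backup["name"])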