885 results for Quality of Service (QoS)
Abstract:
Wireless mesh networks (WMNs), thanks to their advantageous characteristics, are regarded as an effective solution for supporting voice, video and data services in next-generation networks. Through its mesh mode, the IEEE 802.16-d standard specifies two data-transmission scheduling mechanisms for WMNs: centralized scheduling and distributed scheduling. In this work, we evaluated the standard's quality of service (QoS) support, focusing on distributed scheduling, and identified the system's shortcomings in supporting voice traffic. To address these shortcomings, we proposed a VoIP support protocol (AVSP) as an extension to the original standard that adds QoS support for VoIP. Our preliminary simulation results show that AVSP clearly improves VoIP support.
Abstract:
Call centers are key elements of almost any large organization. The workforce-management problem has received a great deal of attention in the literature. A typical formulation relies on performance measures over an infinite horizon, and the staffing problem is usually solved by combining optimization and simulation methods. In this thesis, we consider a staffing problem for call centers subject to chance constraints. We introduce a formulation that requires the quality of service (QoS) constraints to be satisfied with high probability, and we define a sample-average approximation of this problem in a multi-skill setting. We establish the convergence of the solution of the approximate problem to that of the original problem as the sample size grows. For the particular case in which all agents have all skills (a single agent group), we design three simulation-based optimization methods for the sample-average problem. Given an initial staffing level, we increase the number of agents in the periods where the constraints are violated and decrease the number of agents in the periods where the constraints remain satisfied after the reduction. Numerical experiments are carried out on several low-occupancy call center models, in which the algorithms yield good solutions: most of the chance constraints are satisfied, and the staffing in any given period cannot be reduced without introducing a constraint violation. One advantage of these algorithms over other methods is their ease of implementation.
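A minimal sketch of the staffing heuristic described above, under simplifying assumptions: the chance constraints and the sample-average simulation used in the thesis are replaced here by a single-skill, per-period Erlang-C service-level estimate, and the arrival rates, service rate and target are hypothetical.

    import math

    def erlang_c_wait_prob(agents, load):
        """Probability that an arriving call waits (Erlang C); load = lambda/mu in Erlangs."""
        if agents <= load:
            return 1.0
        summation = sum(load**k / math.factorial(k) for k in range(agents))
        top = load**agents / math.factorial(agents) * agents / (agents - load)
        return top / (summation + top)

    def service_level(agents, lam, mu, awt):
        """Fraction of calls answered within the acceptable waiting time awt."""
        load = lam / mu
        if agents <= load:
            return 0.0
        pw = erlang_c_wait_prob(agents, load)
        return 1.0 - pw * math.exp(-(agents * mu - lam) * awt)

    def adjust_staffing(arrival_rates, mu=0.2, awt=0.333, target=0.8):
        """Add agents in periods violating the QoS target, then trim any slack."""
        staffing = []
        for lam in arrival_rates:
            s = max(1, math.ceil(lam / mu))          # start near the offered load
            while service_level(s, lam, mu, awt) < target:
                s += 1                               # increase while the constraint is violated
            while s > 1 and service_level(s - 1, lam, mu, awt) >= target:
                s -= 1                               # decrease while the constraint still holds
            staffing.append(s)
        return staffing

    # Hypothetical periods: arrival rates in calls per minute.
    print(adjust_staffing([4.0, 7.5, 10.0, 6.0]))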
Abstract:
The emergence of new applications and services (such as multimedia applications, voice over IP, IP television, video on demand, etc.) and users' growing need for mobility lead to an ever-increasing demand for bandwidth and to difficulties in managing it in wireless cellular networks (WCNs), causing a degradation of the quality of service. In this thesis we therefore address resource management, more precisely bandwidth management, in WCNs. In the first part of the thesis, we focus on predicting the mobility of WCN users. In this context, we propose a relatively accurate mobility prediction model that predicts the final or intermediate destination and, subsequently, the paths of mobile users towards their predicted destination. The model relies on: (a) the user's movement habits (filtered by type of day and time of day); (b) the user's current movement; (c) knowledge about the user; (d) the direction towards an estimated destination; and (e) the spatial structure of the movement area. Simulation results show that this model is considerably more accurate than existing approaches. In the second part of the thesis, we address admission control and bandwidth management in WCNs. We propose a bandwidth management approach comprising: (1) an approach for estimating handoff times that takes into account the user density of the movement area, the mobility characteristics of the users and the traffic lights; (2) an approach for estimating in advance the bandwidth available in the cells, taking into account the bandwidth requirements and the lifetime of ongoing sessions; and (3) an approach for passive bandwidth reservation in the cells that ongoing sessions will visit and for admission control of new session requests, taking into account user mobility and cell behaviour. Simulation results indicate that this approach greatly reduces abrupt terminations of ongoing sessions while offering an acceptable rejection rate for new connection requests and a high bandwidth utilization rate. In the third part of the thesis, we tackle the main limitation of the first two parts, namely scalability (with respect to the number of users), and propose a framework that integrates mobility prediction models with models for predicting the available bandwidth. Indeed, in the two previous parts of the thesis, mobility predictions are carried out for each individual user. To make the proposed framework scalable, we propose group-based mobility prediction models relying on: (a) user profiles (i.e., their preferences in terms of route characteristics); (b) the state of road traffic and the behaviour of the users; and (c) the spatial structure of the movement area. Simulation results show that the proposed framework improves network performance compared with existing frameworks that use group-based mobility prediction models for bandwidth reservation.
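To illustrate the habit-plus-direction idea behind such a mobility predictor, the hypothetical sketch below scores candidate destinations by combining the frequency of past trips (filtered by day type and time of day) with the alignment between the current heading and the bearing to each candidate. The weights, history format and scoring rule are invented for illustration and are far simpler than the model proposed in the thesis.

    import math

    def direction_alignment(position, heading, candidate):
        """Cosine similarity between the current heading and the bearing to a candidate."""
        dx, dy = candidate[0] - position[0], candidate[1] - position[1]
        norm = math.hypot(dx, dy) * math.hypot(*heading)
        return 0.0 if norm == 0 else (dx * heading[0] + dy * heading[1]) / norm

    def predict_destination(position, heading, day_type, hour, history, w_habit=0.6, w_dir=0.4):
        """Rank candidate destinations by habit frequency and direction alignment."""
        # Keep only past trips matching the current day type and time of day.
        relevant = [d for (dtype, h, d) in history if dtype == day_type and abs(h - hour) <= 1]
        total = max(len(relevant), 1)
        scores = {}
        for dest in set(d for (_, _, d) in history):
            habit = relevant.count(dest) / total
            align = max(direction_alignment(position, heading, dest), 0.0)
            scores[dest] = w_habit * habit + w_dir * align
        return max(scores, key=scores.get), scores

    # Hypothetical trip history: (day type, hour of day, destination coordinates).
    history = [("weekday", 8, (5.0, 9.0)), ("weekday", 9, (5.0, 9.0)),
               ("weekday", 8, (2.0, -3.0)), ("weekend", 11, (8.0, 1.0))]
    best, scores = predict_destination(position=(0.0, 0.0), heading=(1.0, 2.0),
                                       day_type="weekday", hour=8, history=history)
    print(best, scores)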
Abstract:
Service-oriented architecture (SOA) technology has raised great expectations in industry as well as in research. It has proven to be the ideal solution, at present, for environments in which IT requirements change rapidly. Today's IT systems must allow management tasks such as installing, adapting or replacing software without significantly disturbing ongoing operations. Service-oriented architectures, in which software components are made available in the form of services, provide the necessary flexibility. Through its interface, a service gives local as well as remote applications access to its functionality. In the following we consider only those service-oriented architectures in which services can be dynamically discovered, bound, composed, negotiated and adapted at run time. An application can work with different services, for example when services fail or when a new service fulfils the application's requirements better. One of our basic assumptions is therefore that both the supply of services and the demand side are variable. Service-oriented architectures carry particular weight in the implementation of business processes. Within the Enterprise Integration Architecture paradigm, individual work steps are implemented as services and a business process is executed as a workflow of services. Such a service composition is also called an orchestration. Especially for so-called B2B (business-to-business) integration, services are the proven means of supporting communication across company boundaries. Services are usually realized here as Web Services, which are orchestrated by means of BPEL4WS. XML-based messaging and the HTTP protocol provide compatibility between heterogeneous systems and transparency of the message traffic. The providers of these services expect considerable benefit from making them public: on the one hand, they hope for increased integration of their services into software processes; on the other hand, they count on new software being developed on the basis of their services. In the future, hundreds of such services will be available, and it will be hard for developers to find suitable offers. The ADDO project has achieved important results in this field. In the course of the project it was shown that the use of semantic specifications makes it possible to automatically screen services with respect to both their functional and their non-functional properties, in particular quality of service, and to bind them into service aggregates [15]. To this end, ontology schemas [10, 16], matching algorithms [16, 9] and tools were developed and implemented as a framework [16]. The quality-of-service matching algorithm developed in this context supports the automatic negotiation of contracts for service usage, for example in order to integrate fee-based services. ADDO provides an approach for creating templates for service aggregates in BPEL4WS that are managed automatically at run time. The approach proved its effectiveness at the international Web Service Challenge 2006 in San Francisco: the algorithm for semantic service composition developed for ADDO took first place. The algorithm makes it possible to select a suitable subset from a very large set of offered services, to combine these services into service aggregates and thereby to provide the functionality of a given requested service. Further results of the ADDO project have been published at international workshops and conferences [12, 11].
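ADDO's matching relies on the ontology schemas and algorithms cited above; the sketch below is only a hypothetical illustration of the underlying idea of screening service offers against a functional category and quality-of-service requirements and ranking the feasible ones. All service names, properties and thresholds are invented.

    from dataclasses import dataclass

    @dataclass
    class ServiceOffer:
        name: str
        category: str          # stand-in for a semantic (ontology-based) functional description
        response_ms: float     # advertised quality-of-service properties
        availability: float
        price: float

    def match_services(offers, category, max_response_ms, min_availability, budget):
        """Keep offers that satisfy the functional category and every QoS requirement,
        then rank the survivors by price (cheapest first)."""
        feasible = [o for o in offers
                    if o.category == category
                    and o.response_ms <= max_response_ms
                    and o.availability >= min_availability
                    and o.price <= budget]
        return sorted(feasible, key=lambda o: o.price)

    # Hypothetical registry of offers.
    offers = [ServiceOffer("FastQuote", "stock-quote", 80, 0.999, 0.05),
              ServiceOffer("CheapQuote", "stock-quote", 400, 0.990, 0.01),
              ServiceOffer("Converter", "currency", 50, 0.999, 0.02)]
    for o in match_services(offers, "stock-quote", max_response_ms=100,
                            min_availability=0.995, budget=0.10):
        print(o.name)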
Abstract:
This report presents the results and progress of the ADDOaction research project. Through the developments of the last decades, the Internet has become an important infrastructure for business processes. Arbitrary applications can be offered as services and made available to customers online over the Internet. A flexible service architecture is characterized by a certain degree of dynamism, in which services can be adapted, exchanged or removed, and may be provided simultaneously by different providers. To guarantee customer satisfaction, services must fulfil both the functional and the non-functional quality of service (QoS) requirements of the clients. The large number of offered services and the differing requirements of the clients make manual discovery and manual management of services practically impossible. ADDOaction addresses exactly these problems of a service-oriented architecture and delivers innovative solutions ranging from the automatic discovery of services to the monitoring and management of services at run time.
Abstract:
In the B-ISDN there is a provision for four classes of services, all of them supported by a single transport network (the ATM network). Three of these services, the connection-oriented (CO) ones, permit connection access control (CAC), but the fourth, the connectionless-oriented (CLO) one, does not. Therefore, when the CLO service and CO services have to share the same ATM link, a conflict may arise: a bandwidth allocation that seeks maximum statistical gain can damage the contracted ATM quality of service (QoS) and, vice versa, guaranteeing the contracted QoS means sacrificing statistical gain. The paper presents a performance evaluation study of the influence of the CLO service on a CO service (a circuit emulation service or a variable bit-rate service) when sharing the same link.
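The trade-off can be illustrated with a toy on/off source model (not the evaluation model of the paper): admitting more connectionless sources on the bandwidth left over by the CO services increases the statistical gain, but also the probability that their aggregate rate overflows that bandwidth and harms the contracted QoS. All figures below are hypothetical.

    from math import comb

    def overflow_probability(n_sources, p_active, peak_rate, capacity):
        """Probability that the aggregate rate of n on/off sources exceeds the capacity
        left for them on the link (a crude stand-in for ATM cell-loss risk)."""
        max_ok = int(capacity // peak_rate)        # simultaneously active sources that fit
        return sum(comb(n_sources, k) * p_active**k * (1 - p_active)**(n_sources - k)
                   for k in range(max_ok + 1, n_sources + 1))

    link = 150.0          # Mb/s ATM link (hypothetical)
    reserved_co = 90.0    # bandwidth contracted by the connection-oriented services
    peak, p = 10.0, 0.2   # each connectionless source: 10 Mb/s peak, active 20% of the time

    for n in (6, 12, 18, 24):
        gain = n * peak / (link - reserved_co)     # admitted peak load vs. allocated bandwidth
        print(f"{n:2d} CLO sources  statistical gain {gain:4.1f}  "
              f"overflow prob {overflow_probability(n, p, peak, link - reserved_co):.4f}")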
Abstract:
Network diagnosis in Wireless Sensor Networks (WSNs) is a difficult task due to their improvised, ad hoc nature, the invisibility of their internal running status, and particularly because the network structure can change frequently due to link failures. To address this problem, we propose a Mobile Sink (MS) based distributed fault diagnosis algorithm for WSNs. An MS, or mobile fault detector, is usually a mobile robot or vehicle equipped with a wireless transceiver that performs the task of a mobile base station while also diagnosing the hardware and software status of the deployed network sensors. Our MS moves through the network area, polling each static sensor node to diagnose the hardware and software status of nearby sensor nodes using only single-hop communication, which significantly increases the fault-detection accuracy and the functionality of the network. In order to maintain an excellent Quality of Service (QoS), we employ an optimal fault-diagnosis tour planning algorithm. In addition to saving energy and time, the tour planning algorithm excludes faulty sensor nodes from the next diagnosis tour. We demonstrate the effectiveness of the proposed algorithms through simulation and real-life experimental results.
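The paper describes its tour planner as optimal; as a rough, hypothetical stand-in, the sketch below builds a greedy nearest-neighbour tour for the mobile sink over the nodes not yet marked faulty. The node positions and the fault set are invented.

    import math

    def plan_diagnosis_tour(base_station, nodes, faulty):
        """Greedy nearest-neighbour tour for the mobile sink: visit every node not yet
        marked faulty, always moving to the closest unvisited one (a simple heuristic,
        not an optimal tour)."""
        remaining = {nid: pos for nid, pos in nodes.items() if nid not in faulty}
        tour, current = [], base_station
        while remaining:
            nid = min(remaining, key=lambda n: math.dist(current, remaining[n]))
            tour.append(nid)
            current = remaining.pop(nid)
        return tour

    # Hypothetical deployment: node id -> (x, y) position, node 3 already diagnosed as faulty.
    nodes = {1: (2, 1), 2: (8, 3), 3: (5, 5), 4: (1, 7), 5: (9, 8)}
    print(plan_diagnosis_tour(base_station=(0, 0), nodes=nodes, faulty={3}))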
Abstract:
During the last decade, Internet usage has been growing at an enormous rate, accompanied by the development of network applications (e.g., video conferencing, audio/video streaming, e-learning, e-commerce and real-time applications) and carrying several types of information, including data, voice, pictures and streaming media. While end users demand very high quality of service (QoS) from their service providers, the network undergoes complex traffic that leads to transmission bottlenecks. Considerable effort has been devoted to studying the characteristics and behaviour of the Internet. Simulation modelling of computer network congestion is a profitable and effective technique that fulfils the requirements for evaluating the performance and QoS of networks. To simulate a single congested link, the simulation is run with a single load generator, whereas a larger simulation with complex traffic, in which the nodes are spread across different geographical locations, requires the generation of distributed artificial loads. One solution is to elaborate a load-generation system based on a master/slave architecture.
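A minimal sketch of such a master/slave load-generation system, under invented assumptions: the master splits a target aggregate packet rate evenly among named slave generators, and each slave produces Poisson arrival timestamps for its share. A real system would run the slaves on distributed hosts rather than in a single process.

    import random

    class SlaveGenerator:
        """A load-generation slave: emits Poisson packet arrivals at its assigned rate."""
        def __init__(self, name, rate_pps):
            self.name, self.rate = name, rate_pps

        def generate(self, duration_s):
            t, timestamps = 0.0, []
            while True:
                t += random.expovariate(self.rate)   # exponential inter-arrival times
                if t > duration_s:
                    return timestamps
                timestamps.append(t)

    class Master:
        """The master splits a target aggregate load evenly across the slaves and
        collects their traces, mimicking geographically distributed generators."""
        def __init__(self, slave_names):
            self.slave_names = slave_names

        def run(self, total_rate_pps, duration_s):
            share = total_rate_pps / len(self.slave_names)
            slaves = [SlaveGenerator(n, share) for n in self.slave_names]
            return {s.name: s.generate(duration_s) for s in slaves}

    traces = Master(["site-A", "site-B", "site-C"]).run(total_rate_pps=300, duration_s=2.0)
    for name, ts in traces.items():
        print(name, len(ts), "packets")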
Abstract:
In the last decade, mobile wireless communications have witnessed an explosive growth in user penetration and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution towards the full-IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance in the development of the information society of the near future. In particular, a research topic of special relevance in telecommunications nowadays is the design and implementation of 4th-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will have to sustain the stringent quality of service (QoS) requirements and the high data rates expected from the kind of multimedia applications that will be available in the near future. The approach followed in the design and implementation of the current generations of mobile wireless networks (2G and 3G) has been to stratify the architecture into a communication protocol model composed of a set of layers, each of which encompasses a set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service access points. This modular concept eases the implementation of new functionalities, since the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not use information available at upper layers, and vice versa, degrades the achievable performance. This is particularly relevant when multiple-antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped onto the spatial domain cannot be assumed to be completely orthogonal, due to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information on the state of the radio resources, from lower to upper layers, is of fundamental importance to achieving the levels of QoS expected by those multimedia applications. In order to match application requirements to the constraints of the mobile radio channel, researchers have in recent years proposed a new paradigm for the layered communications architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully in line with the cross-layer design paradigm.
Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity by taking into account the limitations imposed by the mobile radio channel while complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, appears to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. Moreover, the connection-oriented approach of its medium access control layer is fully compliant with the quality of service demands of such applications. Mobile WiMAX therefore seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the set of available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, in accordance with the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
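As a concrete example of a cross-layer scheduling rule of this kind, the sketch below uses an M-LWDF-like priority metric, chosen purely for illustration and not necessarily one of the schedulers proposed in the thesis: it combines a physical-layer term (instantaneous over average rate) with a QoS term (head-of-line delay over the delay budget) and serves the user with the highest product. The per-user numbers are hypothetical.

    def schedule_slot(users):
        """Pick the user to serve in this slot with an M-LWDF-like cross-layer priority:
        instantaneous rate from the PHY, average throughput, and head-of-line delay
        against the delay budget of the application's QoS class."""
        def priority(u):
            channel_term = u["inst_rate"] / max(u["avg_rate"], 1e-9)   # PHY / link adaptation input
            qos_term = u["hol_delay"] / u["delay_budget"]              # upper-layer QoS input
            return channel_term * qos_term
        return max(users, key=priority)["id"]

    # Hypothetical per-user state for one scheduling interval (rates in Mb/s, delays in ms).
    users = [
        {"id": "voip-1",  "inst_rate": 1.2, "avg_rate": 0.8, "hol_delay": 35, "delay_budget": 50},
        {"id": "video-7", "inst_rate": 6.0, "avg_rate": 4.5, "hol_delay": 90, "delay_budget": 300},
        {"id": "web-3",   "inst_rate": 9.0, "avg_rate": 7.0, "hol_delay": 10, "delay_budget": 1000},
    ]
    print(schedule_slot(users))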
Abstract:
Modern wireless systems employ adaptive techniques to provide high throughput while observing the desired coverage, Quality of Service (QoS) and capacity. An alternative for further enhancing the data rate is to apply cognitive radio concepts, where a system is able to exploit unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques such as Automatic Modulation Classification (AMC) can help, or even be vital, in such scenarios. AMC implementations usually rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g., Gaussianity of the noise). This work proposes a new method for performing AMC that uses a similarity measure from the Information Theoretic Learning (ITL) framework known as the correntropy coefficient. It extracts similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for instance, the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in classifying digital modulations, even in the presence of additive white Gaussian noise (AWGN).
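For reference, a minimal sketch of a correntropy coefficient estimate with a Gaussian kernel is given below; the kernel width, test signals and noise levels are arbitrary choices, not those of the paper.

    import numpy as np

    def centered_correntropy(x, y, sigma=1.0):
        """Centered correntropy estimate with a Gaussian kernel (an ITL similarity
        measure involving higher-order statistics of the difference x - y)."""
        kernel = lambda d: np.exp(-d**2 / (2.0 * sigma**2))
        cross = np.mean(kernel(x - y))                       # E[k(X - Y)] along the pairs
        indep = np.mean(kernel(x[:, None] - y[None, :]))     # estimate under independence
        return cross - indep

    def correntropy_coefficient(x, y, sigma=1.0):
        """Normalized, correlation-like coefficient built from centered correntropy."""
        num = centered_correntropy(x, y, sigma)
        den = np.sqrt(centered_correntropy(x, x, sigma) * centered_correntropy(y, y, sigma))
        return num / den

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 400)
    clean = np.cos(2 * np.pi * 5 * t)
    noisy = clean + 0.3 * rng.standard_normal(t.size)        # AWGN-corrupted copy
    print(correntropy_coefficient(clean, noisy),
          correntropy_coefficient(clean, rng.standard_normal(t.size)))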
Abstract:
The growing use of telecommunication services, especially wireless ones, has demanded the adoption of new network standards that offer high transmission rates and reach a larger number of users. In this context, the IEEE 802.16 standard, on which WiMAX is based, emerges as a potential technology for providing broadband access in the next generation of wireless networks, mainly because it natively offers Quality of Service (QoS) for voice, data and video flows. Video-based applications have grown considerably in recent years; the forecast for 2011 is that this type of content will exceed 50% of all traffic originating from mobile devices. Video applications have a strong appeal to the end user, who is in fact the one who should assess the level of quality received. Therefore, new performance evaluation methods are needed that take the user's perception into account, complementing the traditional techniques that rely only on network-level aspects (QoS). This is the context in which performance evaluation based on Quality of Experience (QoE) emerged, where the end user's assessment of the application is the main parameter measured. The results of QoE investigations can be used as an extension of the traditional QoS methods, while providing information about the delivery of multimedia services from the user's point of view. Examples of control mechanisms that may be included in QoE-aware networks are new routing approaches, base-station selection processes and traffic conditioning. The two evaluation methodologies are complementary and, when used together, can produce a more robust assessment; however, the large amount of information makes this combination difficult. In this context, the main goal of this dissertation is to create a methodology for predicting video quality in WiMAX networks using a combination of simulations and Computational Intelligence (CI) techniques. From QoS and QoE parameters obtained through the simulations, the future behaviour of the video is predicted using Artificial Neural Networks (ANNs). While simulations offer a range of options, such as extrapolating scenarios so as to mimic real-world situations, CI techniques speed up the analysis of the results, enabling predictions of future behaviour, correlations and more. In this work, ANNs were chosen because they are the most widely used technique for the kind of behaviour prediction proposed in this dissertation.
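A minimal sketch of the prediction step, assuming a small multilayer perceptron (scikit-learn's MLPRegressor) that maps QoS parameters onto a MOS-like QoE score. The training data below are synthetic stand-ins for the QoS/QoE samples the dissertation obtains from simulation, and the feature set and network size are hypothetical.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    n = 500

    # Hypothetical QoS samples: packet loss (%), delay (ms), jitter (ms), throughput (Mb/s).
    X = np.column_stack([rng.uniform(0, 10, n), rng.uniform(20, 300, n),
                         rng.uniform(0, 50, n), rng.uniform(0.5, 6, n)])

    # Synthetic MOS-like QoE target in [1, 5]; a real study would use measured/assessed scores.
    mos = 5 - 0.25 * X[:, 0] - 0.004 * X[:, 1] - 0.02 * X[:, 2] + 0.15 * X[:, 3]
    mos = np.clip(mos + 0.1 * rng.standard_normal(n), 1, 5)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X, mos)

    # Predict the expected QoE for two new network conditions.
    print(model.predict([[0.5, 40, 5, 4.0],      # lightly loaded link
                         [8.0, 250, 40, 1.0]]))  # congested link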
Abstract:
Wireless Mesh Networks (WMNs) are expected to be one of the most important wireless technologies for providing last-mile access in future multimedia networks. They will allow thousands of fixed and mobile users to access, produce and share multimedia content ubiquitously. In this context, 3D video is expected to attract more and more of the multimedia market, with the prospect of strengthening applications such as video surveillance, mission-critical control and entertainment. However, the challenge of coping with the fluctuating bandwidth, scarce resources and time-varying error rates of these networks illustrates the need for more error-resilient 3D video transmission. Thus, alternatives such as Forward Error Correction (FEC) approaches become necessary to deliver video applications to wireless users with assurance of better quality of service (QoS) and Quality of Experience (QoE). This dissertation presents an FEC-based mechanism with Unequal Error Protection (UEP) to improve 3D video transmission in WMNs, increasing user satisfaction and allowing better use of wireless resources. The benefits and impact of the proposed mechanism are demonstrated through simulation, and the evaluation is carried out using objective and subjective QoE metrics.
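The idea of unequal error protection can be illustrated with a small calculation (hypothetical packet counts, redundancy ratios and loss rate, not the mechanism evaluated in the dissertation): each 3D-video layer receives an (n, k) erasure-code configuration sized according to its importance, and the probability of recovering a block under random packet loss follows from the binomial distribution.

    from math import comb

    def recovery_probability(k, n, loss_rate):
        """Probability that at least k of the n packets in an FEC block arrive,
        i.e. that an (n, k) erasure code can reconstruct the block."""
        return sum(comb(n, i) * (1 - loss_rate)**i * loss_rate**(n - i) for i in range(k, n + 1))

    def uep_plan(layers, loss_rate):
        """Unequal error protection: give each 3D-video layer a different amount of
        redundancy according to its importance, and report the resulting resilience."""
        plan = {}
        for name, k, redundancy in layers:
            n = k + round(k * redundancy)          # parity packets proportional to importance
            plan[name] = (n, k, recovery_probability(k, n, loss_rate))
        return plan

    # Hypothetical layering: (layer, source packets per block, redundancy ratio).
    layers = [("base/left view",    20, 0.50),     # most important: protect heavily
              ("enhancement",       20, 0.20),
              ("depth/right view",  20, 0.10)]     # least important: protect lightly
    for name, (n, k, p) in uep_plan(layers, loss_rate=0.10).items():
        print(f"{name:18s} ({n},{k})  recovery prob {p:.3f}")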
Abstract:
The femtocell concept aims to combine fixed-line broadband access with mobile telephony through the deployment of low-cost, low-power third- and fourth-generation base stations in subscribers' homes. While the self-configuration of femtocells is an advantage, it can limit the quality of service (QoS) delivered to users and reduce the efficiency of the network when it relies on outdated allocation parameters such as the signal power level. To this end, this paper presents a proposal for the optimized allocation of users in a co-channel macro-femto network that enables self-configuration and public access, aiming to maximize the quality of service of applications and to use the available energy more efficiently, in line with the concept of green networking. Thus, when the user needs to make a voice or data call, the mobile phone must decide which network to connect to, using information on the number of connections, the QoS parameters (packet loss and throughput) and the signal power level of each network. For this purpose, the system is modeled as a Markov Decision Process, which is formulated to obtain an optimal policy that can be applied on the mobile phone. The resulting policy is flexible, allowing different analyses, and adaptive to the specific characteristics defined by the telephone company. The results show that, compared to traditional QoS approaches, the proposed policy can improve energy efficiency by up to 10%.
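A toy version of such a decision process is sketched below: the state is the femtocell load level, the action is the network chosen for the call, the reward trades a QoS utility against an energy cost, and value iteration yields the optimal policy. All states, transition probabilities and rewards are invented and far simpler than the model in the paper.

    states = ["femto_low_load", "femto_high_load"]          # hypothetical observable state
    actions = ["use_femto", "use_macro"]

    # P[s][s2]: the femtocell load evolves on its own, regardless of the chosen network.
    P = {"femto_low_load":  {"femto_low_load": 0.8, "femto_high_load": 0.2},
         "femto_high_load": {"femto_low_load": 0.3, "femto_high_load": 0.7}}

    # Immediate reward = QoS utility (throughput/packet-loss proxy) minus an energy cost.
    R = {("femto_low_load", "use_femto"): 10 - 1,   # good QoS, cheap transmission
         ("femto_low_load", "use_macro"):  6 - 3,
         ("femto_high_load", "use_femto"): 3 - 1,   # congested femtocell hurts QoS
         ("femto_high_load", "use_macro"): 6 - 3}

    def value_iteration(gamma=0.9, eps=1e-6):
        V = {s: 0.0 for s in states}
        while True:
            Q = {(s, a): R[(s, a)] + gamma * sum(P[s][s2] * V[s2] for s2 in states)
                 for s in states for a in actions}
            newV = {s: max(Q[(s, a)] for a in actions) for s in states}
            if max(abs(newV[s] - V[s]) for s in states) < eps:
                policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
                return policy, newV
            V = newV

    print(value_iteration())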
Abstract:
Wireless sensor networks (WSNs) are generally used to monitor hazardous events in inaccessible areas. On the one hand, it is therefore preferable to use the minimum transmission power in order to extend the WSN's lifetime as much as possible; on the other hand, it is crucial to guarantee that the transmitted data are correctly received by the other nodes. Trading off power optimization against reliability assurance has thus become one of the most important concerns when dealing with modern WSN-based systems. In this context, we present a transmission power self-optimization (TPSO) technique for WSNs. The TPSO technique consists of an algorithm able to guarantee connectivity as well as an equally high quality of service (QoS), concentrating on the WSN's efficiency (Ef) while optimizing the transmission power necessary for data communication. The main idea behind the proposed approach is thus to trade off WSN Ef against energy consumption in an environment with inherent noise. Experimental results with different types of noise and electromagnetic interference (EMI) are presented in order to demonstrate the effectiveness of the TPSO technique.
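A minimal sketch of the self-optimization idea under an invented link model: the node repeatedly measures its delivery efficiency (Ef) and steps the transmission power down while the efficiency stays above the target, or up when it falls below. The logistic link model, noise figures and step size are hypothetical and not those of the TPSO algorithm.

    import math
    import random

    def packet_success_rate(tx_power_dbm, noise_floor_dbm, samples=200):
        """Toy link model: delivery probability rises with the margin between the
        transmission power and a noisy interference/noise floor."""
        ok = 0
        for _ in range(samples):
            noise = noise_floor_dbm + random.gauss(0, 2)           # EMI / noise fluctuation
            margin = tx_power_dbm - noise - 90                     # hypothetical 90 dB path loss
            ok += random.random() < 1 / (1 + math.exp(-margin))    # logistic success curve
        return ok / samples

    def tpso(target_ef=0.95, power=0.0, p_min=-10.0, p_max=10.0, step=0.5, rounds=30):
        """Self-optimization loop: lower the power while efficiency stays above the
        target, raise it when the delivered-packet efficiency (Ef) drops below it."""
        for _ in range(rounds):
            ef = packet_success_rate(power, noise_floor_dbm=-95)
            power += -step if ef >= target_ef else step
            power = min(max(power, p_min), p_max)
        return power

    print("settled tx power (dBm):", tpso())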
Abstract:
Computer and telecommunication networks are changing the world dramatically and will continue to do so in the foreseeable future. The Internet, primarily based on packet switches, provides very flexible data services such as e-mail and access to the World Wide Web. The Internet is a variable-delay, variable-bandwidth network that, in its initial phase, provides no guarantee on quality of service (QoS). New services are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged that can provide relief to the end user and overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (e.g., twisted pair and cable) to optical fiber, in wide-area, metropolitan-area and even local-area settings. In order to exploit the immense bandwidth potential of optical fiber, interesting multiplexing techniques have been developed over the years.