37 results for offloading
Abstract:
The massive adoption of sophisticated mobile devices and applications has led to an increase of mobile data in the last decade, which is expected to continue. This increase of mobile data negatively impacts network planning and dimensioning, since core networks are heavily centralized. Mobile operators are investigating flattened network architectures that distribute the responsibility of providing connectivity and mobility, in order to improve network scalability and performance. Moreover, service providers are moving content servers closer to the user, in order to ensure high availability and performance of content delivery. Besides the efforts to overcome the explosion of mobile data, current mobility management models are heavily centralized to ensure reachability and session continuity to the users connected to the network. Nowadays, deployed architectures have a small number of centralized mobility anchors managing the mobile data and the mobility context of millions of users, which introduces performance and scalability issues that require costly network mechanisms. Mobility management needs to be rethought out of the box to cope with flattened network architectures and distributed content servers closer to the user, which is the purpose of the work developed in this Thesis. The Thesis starts with a characterization of mobility management into well-defined functional blocks, their interaction and potential grouping. Decentralized mobility management is studied through analytical models and simulations, in which different mobility approaches distinctly distribute the mobility management functionalities through the network. The outcome of this study showed that decentralized mobility management brings advantages. Hence, a novel distributed and dynamic mobility management approach was proposed, and is exhaustively evaluated through analytical models, simulations and testbed experiments. The proposed approach is also integrated with seamless horizontal handover mechanisms, as well as evaluated in vehicular environments. The mobility mechanisms are also specified for multihomed scenarios, in order to provide data offloading with IP mobility from cellular to other access networks. In pursuit of the optimized mobile routing path, a novel network-based strategy for localized mobility is addressed, in which a replication binding system is deployed in the mobility anchors distributed through the access routers and gateways. Finally, we go further into the mobility anchoring subject, presenting a context-aware adaptive IP mobility anchoring model that dynamically assigns to a session the mobility anchors that provide the optimized routing path, based on the user and network context. The integration of dynamic and distributed concepts in mobility management, such as context-aware adaptive mobility anchoring and dynamic mobility support, allows the optimization of network resources and the improvement of user experience. The overall outcome demonstrates that decentralized mobility management is a promising direction; hence, its ideas should be taken into account by mobile operators in the deployment of future networks.
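To make the anchor-assignment idea concrete, here is a minimal sketch of context-aware anchor selection; the data model, cost function and weights are illustrative assumptions, not the thesis's actual algorithm.

```python
# Illustrative sketch of context-aware mobility-anchor selection
# (hypothetical names and cost model; not the thesis implementation).
from dataclasses import dataclass

@dataclass
class Anchor:
    name: str
    hops_to_user: int      # routing distance to the user's access router
    hops_to_server: int    # routing distance to the content server
    load: float            # current anchor load, 0.0 .. 1.0

def select_anchor(anchors, load_weight=2.0):
    """Pick the anchor minimizing a simple path-cost-plus-load score."""
    def score(a):
        return a.hops_to_user + a.hops_to_server + load_weight * a.load
    return min(anchors, key=score)

anchors = [
    Anchor("gateway", hops_to_user=4, hops_to_server=1, load=0.7),
    Anchor("access-router", hops_to_user=1, hops_to_server=3, load=0.2),
]
print(select_anchor(anchors).name)  # -> access-router
```

The score trades routing-path length against anchor load, mirroring the abstract's notion of assigning the anchor that yields the optimized routing path given user and network context.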
Abstract:
Maintaining a high level of data security with a low impact on system performance is particularly challenging in wireless multimedia applications. Protocols that are used for wireless local area network (WLAN) security are known to significantly degrade performance. In this paper, we propose an enhanced security system for a WLAN. Our new design aims to decrease the processing delay and increase both the speed and throughput of the system, thereby making it more efficient for multimedia applications. Our design is based on the idea of offloading computationally intensive encryption and authentication services to the end systems’ CPUs. The security operations are performed by the host’s central processor (which is usually a powerful processor) before delivering the data to the wireless card (which usually has a low-performance processor). By adopting this design, we show that both the delay and the jitter are significantly reduced. At the access point, we improve the performance of network processing hardware for real-time cryptographic processing by using a specialized processor implemented with field-programmable gate array technology. Furthermore, we use enhanced techniques to implement the Counter (CTR) Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) and the CTR protocol. Our experiments show that data encryption and authentication take 20–40 μs on different end-host CPUs (e.g., Intel Core i5, i7, and AMD 6-Core), compared with 10–50 ms when performed on the wireless card. Furthermore, when compared with standard WiFi Protected Access II (WPA2), results show that our proposed security system improves the speed by up to 3.7 times.
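As a rough illustration of the host-side offloading idea, the sketch below times AES-CCM (the cipher mode underlying CCMP) on the host CPU using Python's cryptography package; the frame size and measurement harness are my assumptions, not the authors' test setup.

```python
# Hedged illustration: one AES-CCM encrypt+authenticate operation on
# the host CPU, in the spirit of the paper's offloading design.
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
ccm = AESCCM(key)
nonce = os.urandom(13)           # CCMP uses a 13-byte nonce
frame = os.urandom(1500)         # one MTU-sized payload (assumption)
header = b"80211-header"         # authenticated but not encrypted

t0 = time.perf_counter()
ct = ccm.encrypt(nonce, frame, header)
t1 = time.perf_counter()
print(f"encrypt+authenticate: {(t1 - t0) * 1e6:.1f} us")
```

On a modern desktop CPU this lands in the microsecond range, consistent with the paper's point that host processors handle CCMP far faster than low-end wireless-card processors.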
Abstract:
The lower Silurian Whirlpool Sandstone is composed of two main units: a fluvial unit and an estuarine to transitional marine unit. The lowermost unit is made up of sandy braided fluvial deposits, in shallow valleys, that flowed towards the northwest. The fluvial channels are largely filled by cross-bedded, well-sorted, quartzose sands, with little ripple cross-laminated sand or overbank shale. Erosionally overlying this lower unit are brackish-water to marine deposits. In the east, this unit consists of estuarine channels and tidal flat deposits. The channels consist of fluvial sands at the base, changing upwards into brackish and tidally influenced channelized sandstones and shales. The estuarine channels flowed to the southwest. Westwards, the unit contains backbarrier facies with extensive washover deposits. Separating the backbarrier facies from shoreface sandstone facies to the west are barrier island sands represented by barrier-foreshore facies. The barrier islands are dissected by tidal inlets characterized by fining-upward abandonment sequences. Inlet deposits are also present west of the barrier island, abandoned by transgression on the shoreface. The sandy marine deposits are replaced to the west by carbonates of the Manitoulin Limestone. During the latest Ordovician, a hiatus in crustal loading during the Taconic Orogeny led to erosional offloading and crustal rebound, with the eroded material distributed towards the west, northwest and north as the terrestrial deposits of the fluvial Whirlpool. The "anti-peripheral bulge" of the rebound interfered with the peripheral bulge of the Michigan Basin, nulling the Algonquin Arch and allowing the detritus of the fluvial Whirlpool to spread onto the arch. The Taconic Orogeny resumed in the earliest Silurian with crustal loading to the south and southeast, causing tilting of the surface slope in subsurface Lake Erie towards the southwest. Lowstand terrestrial deposits were scoured into the new slope. The new crustal loading also reactivated the peripheral bulge of the Appalachian Basin, allowing it to interact with the bulge of the Michigan Basin and raising the Algonquin Arch. The crustal loading depressed the Appalachian Basin and allowed transgression to occur. The renewed Algonquin Arch allowed the early Silurian transgression to proceed up two slopes, one to the east and one to the west. The transgression to the east entered the lowstand valleys and created the estuarine Whirlpool. The rising arch caused progradation of the Manitoulin carbonates upon shoreface facies of the Whirlpool Sandstone and upon offshore facies of the Cabot Head Formation. Further crustal loading caused basin subsidence and rapid transgression, abandoning the Whirlpool estuary in an offshore setting.
Abstract:
Vortex-induced motion (VIM) is a highly nonlinear dynamic phenomenon. Usual spectral analysis methods, based on the Fourier transform, rely on the hypotheses of linear and stationary dynamics. A method to treat nonstationary signals that emerge from nonlinear systems is the Hilbert-Huang transform (HHT). The development of an analysis methodology to study the VIM of a monocolumn production, storage, and offloading system using the HHT is presented. The purpose of the methodology is to improve the statistical analysis of VIM. The results proved comparable to those obtained from a traditional analysis (mean of the 10% highest peaks), particularly for motions in the transverse direction, although the results for motions in the in-line direction differed from the traditional analysis by around 25%. The results from the HHT analysis are more reliable than the traditional ones, owing to the larger number of points available to calculate the statistical characteristics. These results may be used to design risers and mooring lines, as well as to obtain VIM parameters to calibrate numerical predictions. [DOI: 10.1115/1.4003493]
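For reference, the traditional statistic mentioned above (mean of the 10% highest peaks) can be sketched as follows on a synthetic motion record; the signal and its parameters are illustrative only.

```python
# Sketch of the "mean of the 10% highest peaks" statistic that the
# HHT-based results are compared against (synthetic VIM-like signal).
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 600, 60000)                            # 600 s record
motion = np.sin(0.8 * t) * (1 + 0.3 * np.sin(0.05 * t))   # toy signal

peaks, _ = find_peaks(motion)
amps = np.sort(motion[peaks])[::-1]             # peak amplitudes, descending
a_10 = amps[: max(1, len(amps) // 10)].mean()   # mean of the highest 10%
print(f"A_1/10 characteristic amplitude: {a_10:.3f}")
```

The HHT-based approach uses instantaneous amplitudes over the whole record rather than only the few highest peaks, which is why the abstract credits it with more points for the statistics.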
Abstract:
Vortex-induced motion (VIM) is the term for vortex-induced vibration (VIV) acting on floating units. The VIM phenomenon can occur in monocolumn production, storage and offloading (MPSO) systems and spar platforms, structures presenting an aspect ratio lower than 4 and unity mass ratio, i.e., structural mass equal to the displaced fluid mass. These platforms can experience motion amplitudes of approximately their characteristic diameters, and therefore the fatigue life of mooring lines and risers can be greatly affected. Two-degrees-of-freedom VIV model tests based on cylinders with low aspect ratio and small mass ratio have been carried out at the recirculating water channel facility available at NDF-EPUSP, in order to better understand this hydro-elastic phenomenon. The tests considered three circular cylinders of mass ratio equal to one and different aspect ratios, respectively L/D = 1.0, 1.7, and 2.0, as well as a fourth cylinder of mass ratio equal to 2.62 and aspect ratio of 2.0. The Reynolds number covered the range from 10,000 to 50,000, corresponding to reduced velocities from 1 to approximately 12. The results of amplitude and frequency in the transverse and in-line directions were analyzed by means of the Hilbert-Huang transform (HHT) method and then compared to those obtained from works found in the literature. The comparisons have shown similar maximum amplitudes for all aspect ratios at small mass ratio, featuring a decrease as the aspect ratio decreases. Moreover, some changes in the Strouhal number have been indirectly observed as a consequence of the decrease in the aspect ratio. In conclusion, comparing results from small-scale platforms with those from bare cylinders, all presenting low aspect ratio and small mass ratio, shows that laboratory experiments may well be used in practical investigations, including those concerning the VIM phenomenon acting on platforms. [DOI: 10.1115/1.4006755]
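The quoted Reynolds-number and reduced-velocity ranges are linked by the standard textbook definitions (not specific to this paper):

```latex
% U: flow speed, D: cylinder diameter, nu: kinematic viscosity of
% water, f_n: natural frequency in still water.
\[
  Re = \frac{U D}{\nu}, \qquad
  V_r = \frac{U}{f_n D}
  \;\Longrightarrow\;
  V_r = \frac{Re\,\nu}{f_n D^{2}},
\]
% so sweeping U at fixed D and f_n scans Re and V_r together, which is
% how Re = 10,000..50,000 maps onto V_r from 1 to about 12.
```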
Abstract:
This PhD Thesis is composed of three chapters, each discussing a specific type of risk that banks face. The first chapter addresses Systemic Risk and how banks become exposed to it through the Interbank Funding Market. Exposures in this market have Systemic Risk implications because the market creates linkages in which the failure of one party can affect the others. By showing that CDS Spreads, as bank risk indicators, are positively related to banks' Net Interbank Funding Market Exposures, this chapter establishes these Systemic Risk implications of Interbank Funding. The second chapter discusses how banks may handle Illiquidity Risk, defined as the possibility of having sudden funding needs. Illiquidity Risk is embodied in this chapter through Loan Commitments, as they oblige banks to lend to their clients up to a certain amount of funds at any time. This chapter points out that using Securitization as a funding facility could allow banks to manage this Illiquidity Risk. To make this case, it demonstrates empirically that banks experiencing an increase in Loan Commitments may see an increase in risk profile, but that this can be offset by an accompanying increase in Securitization Activity. Lastly, the third chapter focuses on how banks manage Credit Risk, also through Securitization. Securitization has a Credit Risk management property in that it allows the offloading of risk. This chapter investigates how banks use this property by looking at the effect of securitization on banks' loan portfolios and overall risk and returns. The findings are that securitization is positively related to loan portfolio size and the portfolio share of risky loans, which translates into higher risk and returns. Thus, this chapter points out that Credit Risk management through Securitization may have been directed towards higher risk-taking for higher returns.
Abstract:
Wireless networks have rapidly become a fundamental pillar of everyday activities. Whether at work or elsewhere, people often benefit from always-on connections. This trend is likely to increase, and current technologies struggle to cope with the increase in traffic demand. To this end, Cognitive Wireless Networks have been studied. These networks aim at a better utilization of the spectrum by understanding the environment in which they operate and adapting accordingly. In particular, national regulators have recently opened consultations on the opportunistic use of the TV bands, which became partially free due to the digital TV switchover. In this work, we focus on the indoor use of TV White Spaces (TVWS). Interesting use cases like smart metering and WiFi-like connectivity arise, and are studied and compared against state-of-the-art technology. New measurements for TVWS networks are presented and evaluated, and fundamental characteristics of the signal derived. Building on these, a new model of spectrum sharing, which also takes into account the height above the terrain, is presented and evaluated in a real scenario. The principal limits and performance of TVWS-operated networks are studied for two main use cases, namely Machine-to-Machine communication and wireless sensor networks, particularly in the smart grid scenario. The outcome is that TVWS are certainly interesting to study and deploy, in particular when used as an additional offload for other wireless technologies. TVWS as the only wireless technology on a device is harder to envision: the uncertainty in channel availability is the major drawback of opportunistic networks, since, depending on the primary network's channel allocation, no channels may be available for communication. TVWS can be effectively exploited as an offloading solution, and most of the contributions presented in this work proceed in this direction.
Abstract:
The goal of this thesis project is to build a middleware-level service for mobile devices that provides support for offloading code to a cloud infrastructure. In particular, the project focuses on the migration of code to virtual machines dedicated to the single user, where the VMs run the same operating system as the mobile device. As in previous work on computation offloading, the project must deliver better performance in terms of execution time and device battery usage. More broadly, the goal is to adapt the computation offloading principle to a context of distributed mobile systems, improving not only the performance of the single device but the execution of the distributed application itself. This is done through dynamic management of offloading decisions based not only on the state of the device, but also on the wishes and/or state of the other users belonging to the same group. For example, one user could influence the decisions of the other group members by specifying a particular requirement, such as high information quality, fast response, or other high-level information. The system provides programmers with a simple definition tool for creating new custom policies and thus specifying new offloading rules. To make the project accessible to a wider number of developers, the tools provided are simple and do not require specific knowledge of the technology. The system was then tested to verify its performance with simple offloading mechanisms. Subsequently, it was also tested to verify that selecting different programmer-defined policies actually leads to the optimization of the designated parameter.
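A hypothetical sketch of what such a programmer-defined offloading policy could look like is given below; the class, method and field names are invented for illustration and are not the middleware's actual API.

```python
# Illustrative policy interface in the spirit of the middleware
# described above (hypothetical names, not the real API).
class OffloadingPolicy:
    def should_offload(self, device, group):
        raise NotImplementedError

class FastResponsePolicy(OffloadingPolicy):
    """Offload whenever the cloud VM would answer faster than local
    execution; a group member requesting fast response lowers the bar."""
    def should_offload(self, device, group):
        remote_ms = device["rtt_ms"] + device["remote_exec_ms"]
        local_ms = device["local_exec_ms"]
        if any(m.get("wants_fast_response") for m in group):
            return remote_ms < local_ms
        # Otherwise offload only when it also helps the battery.
        return remote_ms < local_ms and device["battery_pct"] < 50

policy = FastResponsePolicy()
device = {"rtt_ms": 40, "remote_exec_ms": 80,
          "local_exec_ms": 400, "battery_pct": 70}
print(policy.should_offload(device, group=[{"wants_fast_response": True}]))
```

The point of the sketch is the shape of the abstraction: a policy sees both the local device state and the group members' declared preferences, matching the group-aware decision making the abstract describes.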
Abstract:
Various applications for the purposes of event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such a heterogeneous wireless sensor network during its lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Using multicast communication in wireless sensor networks is an efficient way to handle such a traffic pattern. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Further, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. In order to close the gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient and energy-efficient dissemination of data from one sender node to multiple receivers. In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC supports end-to-end reliability using a NACK-based reliability mechanism. The mechanism is simple and easy to implement and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement after successful reception of all data fragments by the receiver nodes. Three different caching strategies are integrated in SNOMC for efficient handling of the necessary retransmissions, namely caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to proactively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed, and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, PSFQ, and both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy-efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks.
A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and the offloading of functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for efficient operation of the management tasks, due to the organisation of the sensor nodes into small sub-networks, each managed by a mesh node. Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for monitoring, configuration and code updating of sensor nodes. Integration of SNOMC into MARWIS further increases the performance efficiency of the management tasks. To our knowledge, our approach is the first to offer a combination of a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
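The NACK-based reliability mechanism described above can be sketched roughly as follows; fragment numbering and message shapes are assumptions for illustration, not SNOMC's wire format.

```python
# Minimal sketch of a NACK-based reliability loop in the spirit of
# SNOMC: the receiver names only the missing fragments, and a caching
# node retransmits exactly those, then a data ACK closes the transfer.
def missing_fragments(received, total):
    """Receiver side: fragment ids to list in a NACK."""
    return [i for i in range(total) if i not in received]

def nack_round(cache, received, total):
    """Sender/caching-node side: retransmit only what the NACK lists."""
    nack = missing_fragments(received, total)
    if not nack:
        return "DATA_ACK"                 # all fragments arrived
    for frag_id in nack:
        received[frag_id] = cache[frag_id]  # retransmit from cache
    return nack

cache = {i: f"fragment-{i}" for i in range(8)}   # e.g. a code-update image
received = {0: cache[0], 1: cache[1], 3: cache[3], 7: cache[7]}
print(nack_round(cache, received, total=8))      # NACK for 2, 4, 5, 6
print(nack_round(cache, received, total=8))      # complete -> DATA_ACK
```

Whether the retransmission is served by the sender, a branching node, or every intermediate node is exactly the choice among the three caching strategies the abstract mentions.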
Abstract:
This study analyses the contradictory effects of decentralisation on public spending. We distinguish three dimensions of decentralisation and analyse their joint and separate effects on public spending in the Swiss cantons over 20 years. We find that overall decentralisation has a strong, significant and negative effect on the size of the public sector, thus confirming the Leviathan hypothesis. The same holds for fiscal and institutional decentralisation. However, the extent to which political processes and actors are organised locally rather than centrally actually increases central and decreases local spending. This suggests that actors behave strategically when dealing with the centre by offloading the more costly policies. The wider implication of our study is that the balance between self-rule and shared rule also affects the size of the overall political system.
Abstract:
This project consists of the dimensioning of the liquefaction process of an offshore plant for the production of liquefied natural gas, using only N2 as refrigerant in the cooling cycles, thereby avoiding the potential hazards that could arise with mixed refrigerants composed of hydrocarbons. The process was designed to accommodate 35.23 kg/s (roughly one million tonnes per year) of dry natural gas feed, without separation of liquefied petroleum gases (LPG), and to fit within all parameters required in the process specifications. The liquefaction process was dimensioned using the Aspen Plus software. The floating production, storage and offloading system for liquefied natural gas (LNG-FPSO) is a new conceptual unit and an effective and realistic way for the exploitation, recovery, storage, transportation and end-use of marginal gas fields and offshore associated-gas resources. The report details the process, the required equipment and estimated costs, the approximate power requirements, and a brief economic analysis.
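As a quick sanity check of the quoted design capacity (taking one year as about 3.156 x 10^7 s):

```latex
\[
  35.23\ \mathrm{kg/s} \times 3.156\times 10^{7}\ \mathrm{s/yr}
  \approx 1.11\times 10^{9}\ \mathrm{kg/yr}
  \approx 1.1\ \mathrm{MTPA},
\]
% consistent with the "roughly one million tonnes per year" figure.
```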
Abstract:
Nowadays, with the growing expansion of oil and gas exploitation in deep waters, there is an increasing demand for offshore operations involving cooperation between floating units. Such operations require a high level of planning and coordination, which in most cases is achieved by exchanging information at the operational level, with each floating unit commanded independently. Examples of operations of this type range from offloading operations, through the installation of subsea equipment, to survey operations involving multiple floating units equipped with dynamic positioning (DP) systems. The advantages of cooperative control arise from the reduction of the relative-distance error during station keeping or during the execution of joint positioning maneuvers. In the present work, consensus control concepts are applied in combination with the DP system of each vessel. The influence of the cooperative controller gains on the overall system is discussed using frequency-response analysis techniques. Full time-domain simulations and experiments using scale models demonstrate the operation of the cooperative control. All simulations were conducted in the Dynasim simulator, and the experimental tests in the towing tank of the Naval Engineering Department of the Escola Politécnica da Universidade de São Paulo. In addition, comparisons between the tank experiments and equivalent numerical simulations demonstrate the validity of the numerical tests, and it is shown that the adopted design requirements are met in the tank tests.
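A textbook form of the consensus term added to each vessel's local DP law, in the spirit of the approach described (not the thesis's exact controller), is:

```latex
% x_i: position/heading of vessel i; tilde{x}_i: its station-keeping
% error; N_i: its neighbours; d_ij^ref: desired relative position.
\[
  u_i = \underbrace{-K_p \tilde{x}_i - K_d \dot{\tilde{x}}_i}_{\text{local DP}}
        \;-\; \sum_{j \in \mathcal{N}_i} a_{ij}
        \bigl[(x_i - x_j) - d_{ij}^{\mathrm{ref}}\bigr],
\]
% where the coupling gains a_ij trade absolute station-keeping
% accuracy against the relative-distance error between vessels.
```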
Abstract:
The concept of hybrid control is applied to the offloading operation between an FPWSO and a shuttle tanker. Both vessels keep their positions and headings through the action of their Dynamic Positioning (DP) systems. The offloading takes about 24 hours to complete. During this period, the sea state can change and the drafts are constantly being altered. A hybrid controller is designed to allow modification of the control/observation parameters if any significant change in the sea state and/or vessel drafts occurs. The main objective of the controllers is to maintain the relative positioning between the vessels, in order to avoid dangerous proximity or excessive tension in the cable. With this in mind, a new control strategy acting on both vessels in an integrated way is proposed, based on differential geometry. Passivity-based nonlinear observers are applied to estimate position, velocity and external forces from calm to extreme seas. The criterion for switching the control/observation is based on the variation of the draft and the sea state. The draft is assumed known, and the sea state is estimated from the peak frequency of the spectrum of the vessels' first-order motions. A perturbation model is proposed to find the number of controllers in the hybrid system. The equivalence between the geometric control and a controller based on Lagrange multipliers is demonstrated. Under certain hypotheses, the equivalence between the geometric and PD controllers is also presented. The performance of the new strategy is evaluated by means of numerical simulations and compared with a PD controller. The results show very good performance with respect to the proposed objective, and the comparison between the geometric approach and the PD controller indicates very similar performance between them.
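The sea-state part of the switching criterion can be sketched as follows: estimate the peak frequency of the first-order motion spectrum and pick a controller set accordingly. The signal, threshold and gain labels below are illustrative assumptions, not the thesis's values.

```python
# Sketch: sea-state estimation from the peak frequency of a vessel's
# first-order motion spectrum (synthetic sway record).
import numpy as np
from scipy.signal import welch

fs = 10.0                                   # Hz, motion sampling rate
t = np.arange(0, 1800, 1 / fs)              # 30-minute record
sway = np.sin(2 * np.pi * 0.10 * t) + 0.2 * np.random.randn(t.size)

f, pxx = welch(sway, fs=fs, nperseg=4096)   # motion power spectrum
f_peak = f[np.argmax(pxx)]                  # ~0.10 Hz for this signal

# Hypothetical lookup: longer-period (lower-frequency) waves indicate
# a heavier sea, triggering a different controller/observer set.
controller = "calm-sea gains" if f_peak > 0.12 else "heavy-sea gains"
print(f"peak {f_peak:.3f} Hz -> {controller}")
```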
Abstract:
This thesis presents the formal definition of a novel Mobile Cloud Computing (MCC) extension of the Networked Autonomic Machine (NAM) framework, a general-purpose conceptual tool which describes large-scale distributed autonomic systems. The introduction of autonomic policies in the MCC paradigm has proved to be an effective technique to increase the robustness and flexibility of MCC systems. In particular, autonomic policies based on continuous resource and connectivity monitoring help automate context-aware decisions for computation offloading. We have also provided NAM with a formalization in terms of a transformational operational semantics, in order to fill the gap between its existing Java implementation NAM4J and its conceptual definition. Moreover, we have extended NAM4J by adding several components with the purpose of managing large-scale autonomic distributed environments. In particular, the middleware allows for the implementation of peer-to-peer (P2P) networks of NAM nodes. Moreover, NAM mobility actions have been implemented to enable the migration of code, execution state and data. Within NAM4J, we have designed and developed a component, denoted as context bus, which is particularly useful in collaborative applications in that, if replicated on each peer, it instantiates a virtual shared channel allowing nodes to notify and get notified about context events. Regarding the management of autonomic policies, we have provided NAM4J with a rule engine, whose purpose is to allow a system to autonomously determine when offloading is convenient. We have also provided NAM4J with trust and reputation management mechanisms to make the middleware suitable for applications in which such aspects are of great interest. To this purpose, we have designed and implemented a distributed framework, denoted as DARTSense, where no central server is required, as reputation values are stored and updated by participants in a subjective fashion. We have also investigated the literature regarding MCC systems. The analysis pointed out that all MCC models focus on mobile devices, and consider the Cloud as a system with unlimited resources. To help fill this gap, we defined a modeling and simulation framework for the design and analysis of MCC systems, encompassing both sides. We have also implemented a modular and reusable simulator of the model. We have applied the NAM principles to two different application scenarios. First, we have defined a hybrid P2P/cloud approach where components and protocols are autonomically configured according to specific target goals, such as cost-effectiveness, reliability and availability. Merging the P2P and cloud paradigms brings together the advantages of both: high availability, provided by the Cloud presence, and low cost, by exploiting inexpensive peers' resources. As an example, we have shown how the proposed approach can be used to design NAM-based collaborative storage systems based on an autonomic policy to decide how to distribute data chunks among peers and Cloud, according to cost minimization and data availability goals. As a second application, we have defined an autonomic architecture for decentralized urban participatory sensing (UPS) which bridges sensor networks and mobile systems to improve effectiveness and efficiency. The developed application allows users to retrieve and publish different types of sensed information by using the features provided by NAM4J's context bus.
Trust and reputation are managed through the application of DARTSense mechanisms. The application also includes an autonomic policy that detects areas characterized by few contributors and tries to recruit new providers by migrating the code necessary for sensing through NAM mobility actions.
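A minimal sketch of the kind of offloading-convenience rule such a rule engine could evaluate is shown below; the thresholds and field names are assumptions for illustration, not NAM4J's actual rule syntax.

```python
# Sketch of an autonomic offloading decision: offload when remote
# execution plus state transfer beats local execution, with a
# battery-saving override (all numbers are illustrative).
def offload_is_convenient(task, link, device):
    transfer_s = task["state_bytes"] / link["throughput_Bps"]
    remote_s = link["rtt_s"] + transfer_s + task["remote_exec_s"]
    if device["battery_pct"] < 20:
        return True                      # save energy even if slower
    return remote_s < task["local_exec_s"]

task = {"state_bytes": 2_000_000, "remote_exec_s": 0.5, "local_exec_s": 4.0}
link = {"throughput_Bps": 1_000_000, "rtt_s": 0.05}
print(offload_is_convenient(task, link, {"battery_pct": 80}))  # True: 2.55 < 4
```

Continuous resource and connectivity monitoring, as the abstract describes, would keep the link and device figures fed into such a rule up to date.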
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of the users’ experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment have been used in the validation tests. It has been shown that Pause Intensity is closely correlated with the subjective quality measurement in terms of the Mean Opinion Score, and that this correlation property is content independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment, where the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritization of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler’s parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user’s playback of the adaptive streaming service. The adaptive rates under various channel conditions, and the shape of the QoE distribution amongst the users for different scheduling policies, have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user’s data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum.
The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and each of the WiFi access points involved. The performance of non-seamless and user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
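As a rough sketch, the pause statistics that Pause Intensity builds on can be computed from playback stall events as below; the final combination of duration and frequency shown here is only a stand-in, since the exact Pause Intensity formula follows the thesis.

```python
# Sketch: pause duration and frequency from playback stall events.
# The joint "stand-in" product is NOT the thesis's exact Pause
# Intensity definition, only an illustration of the two ingredients.
def pause_stats(events, clip_s):
    """events: list of (pause_start_s, pause_end_s) in one playback."""
    total_pause = sum(end - start for start, end in events)
    duration_ratio = total_pause / clip_s     # pause-duration effect
    frequency = len(events) / clip_s          # pauses per second
    return duration_ratio, frequency

events = [(12.0, 14.5), (40.0, 41.0), (73.0, 76.0)]   # three stalls
ratio, freq = pause_stats(events, clip_s=120.0)
print(f"duration ratio {ratio:.3f}, frequency {freq:.4f} /s, "
      f"joint (stand-in) {ratio * freq:.5f}")
```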