8 results for Mobile Video
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The development and evaluation of new algorithms and protocols for Wireless Multimedia Sensor Networks (WMSNs) are usually supported by a discrete event network simulator, OMNeT++ being one of the most important. However, experiments involving multimedia transmission, i.e., video flows with different characteristics, genres, group of pictures (GoP) lengths, and coding techniques, must also be evaluated with Quality of Experience (QoE) metrics to reflect the user's perception. Such experiments require the evaluation of video-related information, i.e., frame type, received/lost frames, delay, jitter, and decoding errors, as well as the inter- and intra-frame dependencies of received/distorted videos. However, existing OMNeT++ frameworks for WMSNs support neither video transmission with QoE awareness nor a large set of mobility traces for evaluations under different multimedia/mobile situations. In this paper, we propose a Mobile MultiMedia Wireless Sensor Network OMNeT++ framework (M3WSN) to support the transmission, control, and evaluation of real video sequences in mobile WMSNs.
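As an illustration of the kind of QoE-oriented bookkeeping such a framework needs, the following C++ sketch tracks frame-level losses while honouring inter-frame dependencies, so that a frame whose reference was lost is counted as undecodable. The type and field names are hypothetical and are not part of the M3WSN API.

```cpp
// Illustrative sketch (not the actual M3WSN API): tracking frame-level loss
// with GoP dependencies, so a lost reference frame also invalidates the
// P/B frames that depend on it. All names here are hypothetical.
#include <map>

enum class FrameType { I, P, B };

struct FrameInfo {
    FrameType type;
    int gopId;          // which group of pictures the frame belongs to
    int refFrameId;     // frame this one depends on (-1 for I-frames)
    bool received;      // whether the frame arrived intact
};

class VideoReceiverStats {
  public:
    void onFrame(int frameId, const FrameInfo& info) { frames_[frameId] = info; }

    // A frame is decodable only if it arrived and its reference chain arrived too.
    bool isDecodable(int frameId) const {
        auto it = frames_.find(frameId);
        if (it == frames_.end() || !it->second.received) return false;
        if (it->second.refFrameId < 0) return true;   // I-frame: no dependency
        return isDecodable(it->second.refFrameId);
    }

    // Fraction of frames that can actually be decoded at the receiver.
    double decodableRatio() const {
        if (frames_.empty()) return 0.0;
        int ok = 0;
        for (const auto& kv : frames_) ok += isDecodable(kv.first) ? 1 : 0;
        return static_cast<double>(ok) / frames_.size();
    }

  private:
    std::map<int, FrameInfo> frames_;
};
```

A ratio like this, combined with per-frame delay and jitter, is the sort of input a QoE metric would consume.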
Abstract:
Wireless mobile sensor networks are enlarging the Internet of Things (IoT) portfolio with a huge number of multimedia services for smart cities. Safety and environmental monitoring multimedia applications will be part of Smart IoT systems, which aim to reduce emergency response time while also predicting hazardous events. In these mobile and dynamic (possibly disaster) scenarios, a predefined end-to-end path is not a reliable solution; opportunistic routing instead allows routing decisions to be made in a completely distributed manner, using hop-by-hop decisions based on protocol-specific characteristics. This enables the transmission of video flows of a monitored area/object with Quality of Experience (QoE) support to users, headquarters, or IoT platforms. However, existing approaches rely on a single metric for the candidate selection rule, such as link quality or geographic information, which causes a high packet loss rate and reduces the video perception from the human standpoint. This article proposes a cross-layer Link quality and Geographical-aware Opportunistic routing protocol (LinGO), designed for video dissemination in mobile multimedia IoT environments. LinGO improves routing decisions by using multiple metrics, including link quality, geographic location, and energy. The simulation results show the benefits of LinGO compared with well-known routing solutions for video transmission with QoE support in mobile scenarios.
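A minimal sketch of what a multi-metric candidate score could look like is given below; the weights, normalization, and structure names are assumptions for illustration, not LinGO's published formula.

```cpp
// Hypothetical multi-metric forwarding score in the spirit of LinGO's
// candidate selection; weights, normalization, and names are assumptions.
#include <algorithm>

struct CandidateInfo {
    double linkQuality;      // delivery probability estimate in [0,1]
    double distToDest;       // candidate's distance to the destination (m)
    double residualEnergy;   // remaining energy, normalized to [0,1]
};

// Higher score = better relay. Geographic progress is how much closer the
// candidate is to the destination than the current node.
double candidateScore(const CandidateInfo& c, double myDistToDest,
                      double wLink = 0.5, double wGeo = 0.3, double wEnergy = 0.2) {
    double progress = (myDistToDest > 0.0)
        ? std::max(0.0, (myDistToDest - c.distToDest) / myDistToDest)
        : 0.0;
    return wLink * c.linkQuality + wGeo * progress + wEnergy * c.residualEnergy;
}
```

In a beacon-less opportunistic setting, each overhearing node could map such a score to a contention timer, so that the best-ranked candidate forwards first and suppresses the others.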
Abstract:
Mobile multimedia ad hoc services run on dynamic topologies due to node mobility or failures and wireless channel impairments. A robust routing service must adapt to topology changes with the aim of recovering or maintaining the video quality level and reducing the impact on the user's experience. In those scenarios, beacon-less Opportunistic Routing (OR) increases robustness by supporting routing decisions in a completely distributed manner based on protocol-specific characteristics. However, existing beacon-less OR approaches do not efficiently combine multiple metrics for forwarding selection, which causes a higher packet loss rate and consequently reduces the video quality level. In this paper, we assess the robustness and reliability under node failures of our recently developed OR protocol, the cross-layer Link quality and Geographical-aware OR protocol (LinGO). Simulation results show that LinGO achieves multimedia dissemination with QoE support and robustness in scenarios with dynamic topologies.
Abstract:
A reliable and robust routing service for Flying Ad-Hoc Networks (FANETs) must be able to adapt to topology changes. User experience when watching live video sequences must also be satisfactory, even in scenarios with buffer overflow and a high packet loss ratio. In this paper, we introduce a Cross-layer Link quality and Geographical-aware beaconless opportunistic routing protocol (XLinGO). It enhances the transmission of multiple simultaneous video flows over FANETs by creating and maintaining reliable, persistent multi-hop routes. XLinGO considers a set of cross-layer and human-related information for routing decisions, such as performance metrics and Quality of Experience (QoE). Performance evaluation shows that XLinGO achieves multimedia dissemination with QoE support and robustness in multi-hop, multi-flow, and mobile network environments.
Abstract:
A reliable and robust routing service for Flying Ad-Hoc Networks (FANETs) must be able to adapt to topology changes and to recover the quality level of multiple delivered video flows under dynamic network topologies. The user experience when watching live videos must also be satisfactory, even in scenarios with network congestion, buffer overflow, and packet loss, as experienced in many FANET multimedia applications. In this paper, we perform a comparative simulation study to assess the robustness, reliability, and quality level of videos transmitted via well-known beaconless opportunistic routing protocols. Simulation results show that our protocol, XLinGO, achieves multimedia dissemination with Quality of Experience (QoE) support and robustness in multi-hop, multi-flow, and mobile networks, as required in many multimedia FANET scenarios.
Abstract:
The user experience when watching live video sequences transmitted over Flying Ad-Hoc Networks (FANETs) must be taken into account when dropping packets from overloaded queues, in scenarios with high buffer overflow and packet loss rates. In this paper, we introduce a context-aware adaptation mechanism to manage overloaded buffers. More specifically, we propose a utility function that computes the dropping probability of each packet in an overloaded queue based on video context information, such as frame importance, packet deadline, and sensing relevance. In this way, the proposed mechanism drops the packet that adds the minimum video distortion. Simulation evaluation shows that the proposed adaptation mechanism provides real-time multimedia dissemination with QoE support in multi-hop, multi-flow, and mobile network environments.
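A minimal sketch of such a utility-driven drop decision is shown below; the weights, value ranges, and names are assumptions for illustration, not the paper's exact function.

```cpp
// Illustrative context-aware drop decision for an overloaded queue.
// Weights and utility shape are assumptions, not the paper's exact function.
// The packet with the lowest utility (least value if kept, hence least
// expected distortion if dropped) is discarded first.
#include <cstddef>
#include <vector>

struct VideoPacket {
    double frameImportance;   // e.g. I > P > B, normalized to [0,1]
    double deadlineSlack;     // remaining time before playout deadline, in [0,1]
    double sensingRelevance;  // relevance of the source node's view, in [0,1]
};

double utility(const VideoPacket& p,
               double wImp = 0.5, double wDeadline = 0.3, double wRel = 0.2) {
    return wImp * p.frameImportance + wDeadline * p.deadlineSlack + wRel * p.sensingRelevance;
}

// On buffer overflow, pick the packet whose removal adds the least distortion.
std::size_t selectPacketToDrop(const std::vector<VideoPacket>& queue) {
    std::size_t victim = 0;
    for (std::size_t i = 1; i < queue.size(); ++i)
        if (utility(queue[i]) < utility(queue[victim])) victim = i;
    return victim;
}
```

With this shape, a near-expired B-frame packet from a low-relevance source ranks lowest and is dropped before a fresh I-frame packet.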
Abstract:
In this work, we give a detailed tutorial on how to use the Mobile Multi-Media Wireless Sensor Networks (M3WSN) simulation framework. The M3WSN framework was published as a scientific paper at the 6th International Workshop on OMNeT++ (2013) [1]. The M3WSN framework enables the transmission of real video sequences, so that a set of multimedia algorithms, protocols, and services can be evaluated using QoE metrics. Moreover, key video-related information, such as frame types, GoP length, and intra-frame dependency, can be used to create new assessment and optimization solutions. To support mobility, M3WSN uses different mobility traces to enable an understanding of how the network behaves in mobile situations. This tutorial covers how to install and configure the M3WSN framework, set up and run experiments, create mobility and video traces, and evaluate the performance of different protocols. The tutorial is given in an Ubuntu 12.04 LTS and OMNeT++ 4.2 environment.
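As a rough illustration of the kind of configuration the tutorial walks through, the omnetpp.ini sketch below shows how an experiment might reference mobility and video traces. The network, module, and parameter names are hypothetical; the actual ones are defined by the M3WSN framework's own NED files and example configurations.

```ini
# Minimal sketch of an OMNeT++ 4.x omnetpp.ini for an M3WSN-style experiment.
# Network, module, and parameter names below are hypothetical placeholders.
[General]
network = WMSNVideoNetwork          # hypothetical network name
sim-time-limit = 100s
repeat = 10

# Node mobility driven by an external trace file (path is illustrative)
**.node[*].mobility.traceFile = "traces/mobility.txt"

# Video source fed by a real encoded sequence's trace file (illustrative)
**.node[0].videoApp.videoTrace = "traces/video_st.txt"
**.node[0].videoApp.gopLength = 30
```

The same pattern, one trace file for node movement and one for the encoded video, is what allows the framework to replay realistic mobile multimedia scenarios and compute QoE metrics afterwards.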