846 results for Distributed coding


Relevance: 70.00%

Abstract:

The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm make it possible to design novel scalable coding solutions with improved error robustness and low encoding complexity while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimation of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both encoder and decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
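A common choice for this kind of correlation model is a Laplacian fit of the residue between the source and the SI. The sketch below is a minimal, hypothetical version of such an estimator; the Laplacian assumption, the function name and the idea of running it identically at encoder and decoder on causal data are illustrative, not the paper's actual model.

```python
import numpy as np

# Minimal sketch: maximum-likelihood fit of a Laplacian correlation model
# f(x) = (alpha / 2) * exp(-alpha * |x|) to the residue between the original
# and SI residues. For a Laplacian, alpha_hat = 1 / mean(|x|). Running the
# same estimator at both ends on already-decoded data would avoid a feedback
# channel (an assumption of this sketch).
def laplacian_alpha(residue):
    mean_abs = np.mean(np.abs(residue))
    return 1.0 / mean_abs if mean_abs > 0 else np.inf

residue = np.random.laplace(scale=2.0, size=10_000)  # synthetic residue
print(laplacian_alpha(residue))                      # close to 1 / 2.0 = 0.5
```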

Relevance: 60.00%

Abstract:

Spatial hearing refers to a set of abilities that enable us to determine the location of sound sources, redirect our attention toward relevant acoustic events, and recognize separate sound sources in noisy environments. Determining the location of sound sources plays a key role in the way humans perceive and interact with their environment. Deficits in sound localization are observed after lesions to the neural tissues supporting these functions and can result in serious handicaps in everyday life. These deficits can, however, be remediated (at least to a certain degree) thanks to the surprising capacity for reorganization that the human brain possesses following damage and/or learning, namely brain plasticity. In this thesis, our aim was to investigate the functional organization of auditory spatial functions and the learning-induced plasticity of these functions. Overall, we describe the results of three studies. The first study, "The role of the right parietal cortex in sound localization: A chronometric single pulse transcranial magnetic stimulation study" (At et al., 2011), study A, investigated the role of the right parietal cortex in spatial functions and its chronometry (i.e., the critical time window of its contribution to sound localization). We concentrated on the behavioral changes produced by the temporary inactivation of the parietal cortex with transcranial magnetic stimulation (TMS). We found that the integrity of the right parietal cortex is crucial for localizing sounds in space and determined a critical time window of its involvement, suggesting a right parietal dominance for auditory spatial discrimination in both hemispaces. In "Distributed coding of the auditory space in man: evidence from training-induced plasticity" (At et al., 2013a), study B, we used electroencephalography (EEG) to investigate the neurophysiological correlates of the different subregions of the right auditory hemispace and the changes induced by a multi-day auditory spatial training in healthy subjects. We report a distributed coding of sound locations over numerous auditory regions: particular auditory areas code specifically for precise parts of the auditory space, and this specificity for a distinct region is enhanced with training. In the third study, "Training-induced changes in auditory spatial mismatch negativity" (At et al., 2013b), study C, we investigated the pre-attentive neurophysiological changes induced by a four-day training in healthy subjects with a passive mismatch negativity (MMN) paradigm. We showed that training changed the mechanisms for the relative representation of sound positions rather than the specific lateralizations themselves, and that it changed the coding in right parahippocampal regions.

Relevance: 60.00%

Abstract:

This project provides an in-depth analysis of the network attack techniques known as APTs (Advanced Persistent Threats), examining the impact they can have on a company's hosts and the information theft and monetary loss they may cause. We examine the techniques attackers use to inject malware into a network and how that malware escalates privileges, extracts privileged information, and stays hidden. As the experimental part of this project, a platform was developed to detect malware in a network based on the websites, URLs, and IPs visited by its hosts. This information is extracted from the company's DNS query logs and records, on which an exhaustive analysis is performed. To infer which hosts are infected, a self-developed algorithm inspired by the Belief Propagation technique is used; this technique has previously been applied for similar purposes by developers at the Los Alamos laboratory (New Mexico, USA). Moreover, to improve inference speed and system performance, an algorithm adapted to the Apache Hadoop platform is proposed, replacing the traditional programming paradigm with the MapReduce paradigm, which splits information into key-value pairs and distributes it among hosts. On the one hand, existing Belief Propagation algorithms for malware discovery are proprietary and have not been fully published to date; on the other hand, these algorithms have not yet been adapted to Hadoop or to any other distributed programming model, an aspect this project addresses. It is not the purpose of this project to develop a commercial or functionally complete platform, but rather to study the APT problem and to provide an implementation demonstrating that the proposed platform is feasible. This project also opens a new line of research on adapting Belief Propagation algorithms to the MapReduce model for malware detection from DNS records.
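To make the MapReduce framing concrete, here is a minimal, hypothetical sketch of one belief-propagation iteration written as plain-Python stand-ins for Hadoop map and reduce stages. The bipartite host/domain graph, the edge weights derived from DNS logs, and the simple weighted-sum message rule are illustrative assumptions, not the project's actual algorithm.

```python
from collections import defaultdict

def map_phase(edges, beliefs):
    # Each edge (host, domain, weight) emits the host's current belief of
    # being infected, scaled by the edge weight, keyed by the domain.
    for host, domain, w in edges:
        yield domain, w * beliefs[host]

def reduce_phase(mapped):
    # Aggregate incoming messages per domain into an updated maliciousness score.
    scores = defaultdict(float)
    for domain, msg in mapped:
        scores[domain] += msg
    return scores

beliefs = {"host_a": 0.9, "host_b": 0.1}            # prior infection beliefs
edges = [("host_a", "evil.example", 1.0),
         ("host_b", "evil.example", 0.5),
         ("host_b", "ok.example", 1.0)]
print(reduce_phase(map_phase(edges, beliefs)))
```

A full run would alternate this domain-scoring pass with a symmetric pass updating host beliefs from domain scores until convergence.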

Relevance: 60.00%

Abstract:

The visual world is presented to the brain through patterns of action potentials in the population of optic nerve fibers. Single-neuron recordings show that each retinal ganglion cell has a spatially restricted receptive field, a limited integration time, and a characteristic spectral sensitivity. Collectively, these response properties define the visual message conveyed by that neuron's action potentials. Since the size of the optic nerve is strictly constrained, one expects the retina to generate a highly efficient representation of the visual scene. By contrast, the receptive fields of nearby ganglion cells often overlap, suggesting great redundancy among the retinal output signals. Recent multineuron recordings may help resolve this paradox. They reveal concerted firing patterns among ganglion cells, in which small groups of nearby neurons fire synchronously with delays of only a few milliseconds. As there are many more such firing patterns than ganglion cells, such a distributed code might allow the retina to compress a large number of distinct visual messages into a small number of optic nerve fibers. This paper will review the evidence for a distributed coding scheme in the retinal output. The performance limits of such codes are analyzed with simple examples, illustrating that they allow a powerful trade-off between spatial and temporal resolution.
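As a back-of-the-envelope illustration of why concerted firing enlarges the symbol set (my numbers, not the paper's analysis):

```python
from math import comb

# Toy count: with N optic nerve fibers, a rate code offers on the order of N
# independent channels, while a code that also treats synchronous firing of
# pairs as distinct symbols gains ~N^2/2 extra candidate patterns, bought at
# the cost of the temporal precision needed to resolve near-coincident spikes.
N = 10_000               # rough order of magnitude for optic nerve fibers
print(N, comb(N, 2))     # 10000 single-cell symbols vs 49995000 pair symbols
```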

Relevance: 60.00%

Abstract:

A wireless mesh network is a mesh network implemented over a wireless network system such as wireless LANs. Wireless Mesh Networks (WMNs) are promising for numerous applications such as broadband home networking, enterprise networking, transportation systems, health and medical systems, and security surveillance systems. They have therefore received considerable attention from both industrial and academic researchers. This dissertation explores schemes for resource management and optimization in WMNs by means of network routing and network coding.

In this dissertation, we propose three optimization schemes. (1) First, a triple-tier optimization scheme is proposed for the load balancing objective. The first-tier mechanism achieves long-term routing optimization, and the second-tier mechanism, using the optimization results obtained from the first tier, performs short-term adaptation to deal with the impact of dynamic channel conditions. A greedy sub-channel allocation algorithm is developed as the third-tier optimization scheme to further reduce the congestion level in the network (see the sketch after this abstract). We conduct thorough theoretical analysis to show the correctness of our design and give the properties of our scheme. (2) Then, a Relay-Aided Network Coding scheme called RANC is proposed to improve the performance gain of network coding by exploiting the physical-layer multi-rate capability in WMNs. We conduct rigorous analysis to find the design principles and study the trade-off in the performance gain of RANC. Based on the analytical results, we provide a practical solution by decomposing the original design problem into two sub-problems: a flow partition problem and a scheduling problem. (3) Lastly, a joint optimization scheme of routing in the network layer and network coding-aware scheduling in the MAC layer is introduced. We formulate the network optimization problem and exploit the structure of the problem via dual decomposition. We find that the original problem is composed of two sub-problems: a routing problem in the network layer and a scheduling problem in the MAC layer. These two sub-problems are coupled through the link capacities. We solve the routing problem with two different adaptive routing algorithms, and we then provide a distributed coding-aware scheduling algorithm. Experimental results show that the proposed schemes can significantly improve network performance.
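Below is a minimal sketch of what a greedy sub-channel allocation pass of the kind named in item (1) might look like. The link/load model, the halving of residual congestion per assigned sub-channel, and all names are my simplifying assumptions, not the dissertation's algorithm.

```python
import heapq

def greedy_allocate(links, num_subchannels):
    # links: dict link -> offered load. Greedy rule: repeatedly give the next
    # sub-channel to the currently most congested link; in this toy model each
    # assigned sub-channel halves that link's residual congestion.
    heap = [(-load, link) for link, load in links.items()]
    heapq.heapify(heap)
    assignment = {link: 0 for link in links}
    for _ in range(num_subchannels):
        neg_load, link = heapq.heappop(heap)
        assignment[link] += 1
        heapq.heappush(heap, (neg_load / 2.0, link))  # reduced congestion
    return assignment

print(greedy_allocate({"a-b": 10.0, "b-c": 4.0, "c-d": 2.0}, 4))
```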

Relevance: 40.00%

Abstract:

One of the most efficient approaches to generating the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated based on past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it is useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some of the coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second mode corresponds to a motion compensated quality enhancement (MCQE) technique, where a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. The novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra coded blocks, for blocks where MCI produces SI with lower correlation. The overall solution is evaluated in terms of RD performance, with improvements of up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
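A toy sketch of the block-level choice between the two SI modes described above; the correlation threshold, the scalar cost model and the function name are assumptions for illustration, not the paper's decision rule.

```python
def choose_si_mode(predicted_correlation, intra_rate_cost, threshold=0.8):
    # MCI: pure decoder-side interpolation, no extra rate spent at the encoder.
    # MCQE: the encoder spends intra_rate_cost bits on a low-quality Intra
    # block that anchors decoder-side motion estimation for this block.
    if predicted_correlation >= threshold:
        return "MCI", 0.0
    return "MCQE", intra_rate_cost

print(choose_si_mode(0.9, 120))   # well-predicted block -> ('MCI', 0.0)
print(choose_si_mode(0.4, 120))   # poorly correlated block -> ('MCQE', 120)
```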

Relevance: 40.00%

Abstract:

Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, depending on the video content, it creates SI with rather significant motion compensation errors in some frame regions and rather small errors in others. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a solution using the WZ coding mode only.
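A minimal sketch of the per-block decision the abstract describes: estimate the rate of each mode at the encoder and pick the cheaper one. The two rate estimators below are crude placeholders (block variance as a spatial-activity proxy), not the paper's actual models.

```python
import numpy as np

def estimate_intra_rate(block):
    # Placeholder: spatial activity stands in for the Intra coding rate.
    return float(np.var(block))

def estimate_wz_rate(block, expected_si_error=4.0):
    # Placeholder: WZ rate grows with activity and the expected SI mismatch.
    return expected_si_error * np.log2(1.0 + np.var(block))

def select_block_mode(block):
    return "INTRA" if estimate_intra_rate(block) < estimate_wz_rate(block) else "WZ"

block = np.random.randint(0, 255, (8, 8)).astype(float)
print(select_block_mode(block))
```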

Relevance: 40.00%

Abstract:

The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributedvideo coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique withmultiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings up to 8.0% are obtained for similar decoded quality.
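As a toy illustration of fusing per-bit soft information from several SI hypotheses before syndrome decoding, the sketch below simply sums log-likelihood ratios (LLRs); the equal-weight summation is my simplifying assumption and stands in for the paper's iterative exchange between LDPC decoders.

```python
import numpy as np

def fuse_llrs(llr_hypotheses):
    # llr_hypotheses: list of arrays, one LLR vector per SI hypothesis.
    # Summing LLRs corresponds to treating the hypotheses as independent
    # observations of the same source bits.
    return np.sum(llr_hypotheses, axis=0)

h1 = np.array([+2.0, -0.5, +0.3])   # LLRs from SI hypothesis 1
h2 = np.array([+1.0, -1.5, -0.2])   # LLRs from SI hypothesis 2
print(fuse_llrs([h1, h2]))          # [ 3.  -2.   0.1]
```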

Relevance: 40.00%

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
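A minimal sketch of one way to combine several SI hypotheses using virtual-channel statistics: weights inversely proportional to each hypothesis's estimated noise variance, a standard denoising heuristic assumed here rather than the paper's exact filter.

```python
import numpy as np

def combine_hypotheses(hypotheses, noise_vars):
    # Inverse-variance weighting: cleaner hypotheses (lower virtual-channel
    # noise variance) contribute more to the combined side information.
    w = 1.0 / np.asarray(noise_vars)
    w /= w.sum()
    return sum(wi * h for wi, h in zip(w, hypotheses))

h1 = np.array([100.0, 102.0, 98.0])   # SI hypothesis 1 (pixel values)
h2 = np.array([104.0, 101.0, 97.0])   # SI hypothesis 2
print(combine_hypotheses([h1, h2], noise_vars=[2.0, 8.0]))
```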

Relevance: 40.00%

Abstract:

Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes reach a good performance for a wide range of source correlations and achieve a better RD performance than the popular turbo codes.
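The row-merging idea has a compact algebraic form: over GF(2), merging two parity-check rows is their XOR, which drops one syndrome bit and thus lowers the rate spent on the source. The sketch below illustrates this on a toy matrix; it is not a designed LDPC code.

```python
import numpy as np

def merge_rows(H, i, j):
    # Merging check nodes i and j: replace both rows by their GF(2) sum (XOR).
    # The code now has one fewer check, i.e. one fewer syndrome bit to send.
    merged = (H[i] ^ H[j])[None, :]
    keep = [r for r in range(H.shape[0]) if r not in (i, j)]
    return np.vstack([H[keep], merged])

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 1, 1, 0]], dtype=np.uint8)
print(merge_rows(H, 0, 1))   # 2 checks remain instead of 3
```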

Relevance: 40.00%

Abstract:

In video communication systems, video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing on principles and tools from an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) an improved compressed-domain perceptual classification mechanism; (ii) an improved transcoding tool for the DSC-based protection mechanism; and (iii) the integration of a perceptual classification mechanism into an H.264/AVC compliant codec with a DSC-based error protection mechanism. The performance results obtained show that the proposed PDEP video codec provides a better performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes. (C) 2013 Elsevier B.V. All rights reserved.
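A toy sketch of the perceptually driven unequal-protection idea: each region is classified by perceptual relevance and assigned a protection strength accordingly. The three-class split, the saliency thresholds and the parity fractions are illustrative assumptions, not the paper's classification mechanism.

```python
def classify(saliency):
    # Map a perceptual relevance score in [0, 1] to a protection class.
    if saliency > 0.7:
        return "high"
    if saliency > 0.3:
        return "medium"
    return "low"

# Fraction of rate spent on DSC/FEC protection per class (assumed values).
PARITY_FRACTION = {"high": 0.50, "medium": 0.25, "low": 0.10}

for s in (0.9, 0.5, 0.1):
    cls = classify(s)
    print(s, cls, PARITY_FRACTION[cls])
```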

Relevance: 40.00%

Abstract:

Cooperative transmission can be seen as a "virtual" MIMO system, where the multiple transmit antennas are in fact implemented in a distributed way by the antennas at both the source and the relay terminal. Depending on the system design, diversity/multiplexing gains are achievable. This design involves the definition of the type of retransmission (incremental redundancy, repetition coding), the design of the distributed space-time codes, the error correcting scheme, the operation of the relay (decode-and-forward or amplify-and-forward), and the number of antennas at each terminal. The proposed schemes are evaluated under different conditions in combination with forward error correcting (FEC) codes, both for linear and near-optimum (sphere decoder) receivers, for possible implementation in the downlink high-speed packet services of cellular networks. Results show the benefits of coded cooperation over direct transmission in terms of increased throughput. It is shown that multiplexing gains are observed even if the mobile station features a single antenna, provided that cell-wide reuse of the relay radio resource is possible.
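A minimal sketch of the distributed space-time coding idea in its simplest form: the source and the relay each act as one "virtual antenna" and jointly transmit the classic 2x2 Alamouti pattern over two slots. This is purely illustrative of D-STBC, not the specific codes evaluated in the paper.

```python
import numpy as np

def alamouti_encode(s1, s2):
    # Rows: time slots; columns: (source antenna, relay antenna).
    # Slot 1: source sends s1, relay sends s2.
    # Slot 2: source sends -conj(s2), relay sends conj(s1).
    return np.array([[s1,           s2],
                     [-np.conj(s2), np.conj(s1)]])

print(alamouti_encode(1 + 1j, 1 - 1j))
```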

Relevance: 40.00%

Abstract:

A parallel interference cancellation (PIC) detection scheme is proposed to suppress the impact of imperfect synchronisation. By treating as interference the extra components in the received signal caused by timing misalignment, the PIC detector not only offers much improved performance but also retains a low structural and computational complexity.
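A toy sketch of one PIC stage under a simple synchronous linear model: every user's symbol is re-detected after subtracting the interference reconstructed from all other users' current estimates. The channel model, the matched-filter detection and the initialization are my assumptions; the paper's detector additionally models the extra terms caused by timing misalignment.

```python
import numpy as np

def pic_stage(y, H, est):
    # y: received vector; H: channel matrix (columns = users);
    # est: current symbol estimates for all users.
    new_est = np.empty_like(est)
    for k in range(H.shape[1]):
        interference = H @ est - H[:, k] * est[k]   # everyone but user k
        residual = y - interference
        # Matched-filter detection of user k on the cleaned signal.
        new_est[k] = np.sign((H[:, k] @ residual) / (H[:, k] @ H[:, k]))
    return new_est

H = np.array([[1.0, 0.6], [0.4, 1.0]])
x = np.array([1.0, -1.0])                 # true BPSK symbols
y = H @ x
est0 = np.sign(H.T @ y)                   # matched-filter initial estimates
print(pic_stage(y, H, est0))              # refined to [ 1. -1.]
```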

Relevance: 40.00%

Abstract:

This paper addresses the impact of imperfect synchronisation on D-STBC when combined with incremental relay. To suppress such an impact, a novel detection scheme is proposed, which retains the two key features of the STBC principle: simplicity (i.e. linear computational complexity), and optimality (i.e. maximum likelihood). These two features make the new detector very suitable for low power wireless networks (e.g. sensor networks).
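For reference, here is the linear maximum-likelihood combiner that such a detector generalizes: under perfect synchronisation, ML detection of a 2x1 Alamouti STBC reduces to two linear combining equations, which is what makes linear-complexity ML detection possible. The imperfect-synchronisation terms handled by the paper's scheme are omitted in this sketch.

```python
import numpy as np

def alamouti_combine(r1, r2, h1, h2):
    # r1, r2: received samples in slots 1 and 2; h1, h2: channel gains of the
    # two (virtual) transmit antennas. Both estimates come out scaled by
    # |h1|^2 + |h2|^2, so a sign/nearest-symbol decision is still ML.
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

h1, h2 = 0.9 + 0.2j, 0.5 - 0.7j
s1, s2 = 1 + 0j, -1 + 0j
r1 = h1 * s1 + h2 * s2                      # slot 1 (noise-free for clarity)
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)   # slot 2
print(alamouti_combine(r1, r2, h1, h2))     # ~(+1.59, -1.59): scaled (s1, s2)
```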