985 results for Network coding
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated based on past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it is useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second corresponds to a motion compensated quality enhancement (MCQE) technique where a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra coded blocks. The overall solution is evaluated in terms of RD performance, with improvements up to 2 dB, especially for high motion video sequences and long Group of Pictures (GOP) sizes.
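The block-level mode decision described above can be sketched in a few lines. This is only an illustration, not the authors' actual criterion: the `sad` measure, the threshold value, and the function names are all assumptions introduced here.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two co-located blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def choose_si_mode(past_block, future_block, threshold=100):
    """Pick MCI when the past and future reference blocks agree well
    (good interpolation expected); otherwise fall back to MCQE, where
    a low-quality Intra block is sent to guide motion estimation."""
    if sad(past_block, future_block) <= threshold:
        return "MCI"
    return "MCQE"

past = [10, 12, 11, 13]
future = [11, 12, 12, 13]
print(choose_si_mode(past, future))  # well-correlated references -> MCI
```

In practice the decision would be driven by an estimate of the correlation noise rather than a fixed SAD threshold; the sketch only conveys the two-mode structure.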
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion compensation errors for some frame regions and rather small errors for others, depending on the video content. In this paper, a low complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements up to 1.2 dB over a WZ coding mode only solution.
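The per-block rate comparison can be illustrated with crude proxies. The rate estimators below (block variance for Intra, source/SI mismatch for WZ) are assumptions made for the sketch, not the paper's actual low-complexity estimators.

```python
import math

def block_variance(block):
    m = sum(block) / len(block)
    return sum((x - m) ** 2 for x in block) / len(block)

def estimate_intra_rate(block):
    # Crude entropy-like proxy: bits grow with the log of block variance.
    return 0.5 * math.log2(1.0 + block_variance(block))

def estimate_wz_rate(block, si_block):
    # WZ rate grows with the mismatch between source and side information.
    mse = sum((x - y) ** 2 for x, y in zip(block, si_block)) / len(block)
    return 0.5 * math.log2(1.0 + mse)

def select_mode(block, si_block):
    """Pick the coding mode with the smaller estimated rate."""
    if estimate_intra_rate(block) < estimate_wz_rate(block, si_block):
        return "Intra"
    return "WZ"
```

Blocks whose SI is expected to be poor (high mismatch) are flagged as 'critical' and coded Intra; the rest stay in WZ mode.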
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder and not at the encoder as in predictive video coding. Although many improvements have been made over the last years, the performance of state-of-the-art WZ video codecs has still not reached that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement and objective quality gains of up to 0.63 dB overall for sequences with high motion content when large group of pictures (GOP) sizes are used.
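The boundary-adaptive idea behind an H.264/AVC-style deblocking filter can be reduced to a 1-D sketch: smooth across a block boundary only when the step there looks like a coding artefact, not a real edge. The `alpha`/`beta` thresholds and the smoothing weights below are illustrative assumptions, not the standard's actual filter tables.

```python
def deblock_boundary(p, q, alpha=10, beta=3):
    """Filter the two samples adjacent to a block boundary.
    p = [p1, p0] lies left of the boundary, q = [q0, q1] right of it.
    Filtering applies only when the step across the boundary is small
    enough to be a blocking artefact (alpha) and both sides are locally
    smooth (beta); a strong step is kept as a genuine image edge."""
    p1, p0 = p
    q0, q1 = q
    if abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta:
        # Weighted smoothing of the two boundary samples.
        p0_f = (2 * p1 + p0 + q0 + 2) // 4
        q0_f = (2 * q1 + q0 + p0 + 2) // 4
        return p1, p0_f, q0_f, q1
    return p1, p0, q0, q1

print(deblock_boundary([20, 20], [24, 24]))  # small step: smoothed
print(deblock_boundary([20, 20], [80, 80]))  # real edge: untouched
```

The adaptive part in the paper's filter additionally tunes the filtering strength to WZ-specific statistics; the sketch only shows the edge/artefact discrimination.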
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder and not at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains up to 2.3 dB in the RD curves against the performance of the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
Abstract:
Threats to information security (INFOSEC) target the loss of confidentiality, integrity and availability, so organizations are compelled to implement security policies, at both the physical and the logical level, using specific defence mechanisms. The Network Air Gap Controller (NAGC) project was conceived as a contribution to these security issues, namely those directly related to the transfer of information between networks of different security classifications or distinct sensitivities, without real-time communication requirements, and which deserve greater effort regarding robustness, availability and control. Organizations that, by virtue of their assigned duties and competencies, need to move information between networks of this kind are sometimes forced to carry out the data transfer through a manual process, performed by a person rather than a machine, involving removable devices such as CDs, DVDs, pen drives, external disks or manual switches. In this process, commonly known as a Network Air Gap (NAG), the person responsible for the data transfer must infallibly guarantee, as intrinsic and inalienable duties of the role, compliance with a broad set of regulatory rules.
The established rules translate into tools and procedures intended, for example, to archive all transfers performed; to apply security tools (e.g. antivirus) before placing information on the higher-classification network; to forbid the transfer of certain file types (e.g. executables); and to guarantee that, in line with the autonomy normally delegated to the person responsible for operating the communications, information is transferred only from the lower-classification network to the higher-classification one. Given the value of the information and the impact on the image of this kind of organization, a communications operator who does not scrupulously comply with these rules is inexorably removed from those duties, although the accountability process cannot always determine unequivocally whether the cause was a deliberate act or unintentional factors such as ineptitude, carelessness or fatigue. In reality, periodic and routine activities make people failure-prone, and such activities can be carried out, without any constraints or loss of guarantees, by technological solutions, provided these are properly parameterized, adapted, tested and matured, freeing human resources for maintenance, management, control and inspection tasks. Moreover, for organizations of this kind, where incoming information networks multiply, with different classifications, distinct actors and specific recipients, the use of such mechanisms is of capital importance.
Because of this multiplicative factor, the NAGC must represent a valid option in terms of technological offering, namely within a range of very low-cost products, and must be able to evolve through layers of complementary contributions, according to the real needs of each scenario.
Abstract:
The devastating impact of the Sumatra tsunami of 26 December 2004 raised the question for scientists of how to forecast a tsunami threat. In 2005, the IOC-UNESCO XXIII assembly decided to implement a global tsunami warning system to cover the regions that were not yet protected, namely the Indian Ocean, the Caribbean and the North East Atlantic, the Mediterranean and connected seas (the NEAM region). Within NEAM, the Gulf of Cadiz is the most sensitive area, with an important record of devastating historical events. The objective of this paper is to present a preliminary design for a reliable tsunami detection network for the Gulf of Cadiz, based on a network of sea-level observatories. The tsunamigenic potential of this region has been reviewed in order to define the active tectonic structures. Tsunami hydrodynamic modeling and GIS technology have been used to identify the appropriate locations for the minimum number of sea-level stations. Results show that three tsunameters are the minimum number of stations necessary to assure acceptable protection for the large coastal population in the Gulf of Cadiz. In addition, 29 tide gauge stations could be necessary to fully assess the effects of a tsunami along the affected coasts of Portugal, Spain and Morocco.
Abstract:
This paper presents an artificial neural network approach for short-term wind power forecasting in Portugal. The increased integration of wind power into the electric grid, as nowadays occurs in Portugal, poses new challenges due to its intermittency and volatility. Hence, good forecasting tools play a key role in tackling these challenges. The accuracy of the wind power forecasting attained with the proposed approach is evaluated against persistence and ARIMA approaches, reporting the numerical results from a real-world case study.
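The persistence baseline mentioned above is trivial to state in code: tomorrow's wind power is assumed equal to the last observed value. The error metric and function names below are illustrative choices, not necessarily those used in the paper.

```python
def persistence_forecast(series, horizon):
    """Persistence baseline: the next `horizon` values are assumed
    equal to the last observed value of the series."""
    return [series[-1]] * horizon

def mape(actual, forecast):
    """Mean absolute percentage error, a common wind-forecast metric
    (assumes no zero values in `actual`)."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

observed = [100.0, 110.0, 105.0]   # past wind power (MW), illustrative
forecast = persistence_forecast(observed, 2)
print(forecast)  # [105.0, 105.0]
print(mape([104.0, 106.0], forecast))
```

Any proposed ANN forecaster is then judged by how much it lowers this error relative to persistence (and to ARIMA) on the same test period.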
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably to raise it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it allows outperforming H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings up to 8.0% are obtained for similar decoded quality.
Abstract:
In recent years, power systems have experienced many changes in their paradigm. The introduction of new players in the management of distributed generation leads to the decentralization of control and decision-making, so that each player is able to act in the market environment. In the new context, it will be very relevant that aggregator players allow midsize, small and micro players to act in a competitive environment. In order to achieve their objectives, virtual power players and single players are required to optimize their energy resource management process. To achieve this, it is essential to have financial resources capable of providing access to appropriate decision support tools. As small players have difficulties in gaining access to such tools, these players must be able to benefit from alternative methodologies to support their decisions. This paper presents a methodology, based on Artificial Neural Networks (ANN), intended to support smaller players. The methodology uses a training set created from energy resource scheduling solutions obtained with a mixed-integer linear programming (MIP) approach as the reference optimization methodology. The trained network is used to obtain locational marginal prices in a distribution network. The main goal of the paper is to verify the accuracy of the ANN based approach. Moreover, the use of a single ANN is compared with the use of two or more ANNs to forecast the locational marginal price.
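The train-on-MIP-solutions idea can be sketched with a deliberately tiny stand-in for the ANN: a single linear neuron fitted by stochastic gradient descent to (feature, LMP) pairs taken from offline scheduling runs. The feature choice, learning rate, and all names are assumptions for illustration only; the paper's actual networks are of course richer.

```python
def train_lmp_model(samples, lr=0.05, epochs=2000):
    """Fit lmp ~ w * x + b by stochastic gradient descent.
    `samples` is a list of (feature, lmp) pairs; in the paper's setting
    these would come from precomputed MIP scheduling solutions."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y       # prediction error on this sample
            w -= lr * err * x            # gradient step on the weight
            b -= lr * err                # gradient step on the bias
    return w, b

# Synthetic training data: LMP rises linearly with normalized load.
samples = [(x / 4, 2 * (x / 4) + 1) for x in range(5)]
w, b = train_lmp_model(samples)
```

Once trained, evaluating `w * x + b` is far cheaper than re-solving the MIP, which is exactly the appeal for small players without access to heavy optimization tools.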
Abstract:
Smart Grids (SGs) appeared as the new paradigm for power system management and operation, being designed to integrate large amounts of distributed energy resources. This new paradigm requires a more efficient Energy Resource Management (ERM) and, simultaneously, makes this a more complex problem, due to the intensive use of distributed energy resources (DER), such as distributed generation, active consumers with demand response contracts, and storage units. This paper presents a methodology to address the energy resource scheduling, considering an intensive use of distributed generation and demand response contracts. A case study of a 30 kV real distribution network, including a substation with 6 feeders and 937 buses, is used to demonstrate the effectiveness of the proposed methodology. This network is managed by six virtual power players (VPP) with capability to manage the DER and the distribution network.
Abstract:
This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling for system component outage parameters. Fuzzy membership functions of system component outage parameters are obtained from statistical records. A mixed integer non-linear optimization technique is developed to identify adequate investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.
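A common way to build a fuzzy membership function from statistical records is a triangular shape anchored at the observed extremes and the most frequent value. Whether the paper uses triangular functions specifically is not stated here; the sketch below is one standard assumption.

```python
def triangular_membership(x, low, mode, high):
    """Triangular fuzzy membership for a component outage parameter:
    `low` and `high` are the observed extremes of the statistical
    record, `mode` its most frequent value. Membership is 1 at the
    mode and falls linearly to 0 at the extremes."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

# e.g. repair times observed between 0 h and 10 h, most often 5 h:
print(triangular_membership(5.0, 0.0, 5.0, 10.0))   # 1.0
print(triangular_membership(2.5, 0.0, 5.0, 10.0))   # 0.5
```

These membership functions then feed the component outage parameters into the mixed integer non-linear investment optimization.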
Abstract:
We study a model consisting of particles with dissimilar bonding sites ("patches"), which exhibits self-assembly into chains connected by Y-junctions, and investigate its phase behaviour by both simulations and theory. We show that, as the energy cost epsilon(j) of forming Y-junctions increases, the extent of the liquid-vapour coexistence region at lower temperatures and densities is reduced. The phase diagram thus acquires a characteristic "pinched" shape in which the liquid branch density decreases as the temperature is lowered. To our knowledge, this is the first model in which the predicted topological phase transition between a fluid composed of short chains and a fluid rich in Y-junctions is actually observed. Above a certain threshold for epsilon(j), condensation ceases to exist because the entropy gain of forming Y-junctions can no longer offset their energy cost. We also show that the properties of these phase diagrams can be understood in terms of a temperature-dependent effective valence of the patchy particles. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3605703]
Abstract:
We introduce a microscopic model for particles with dissimilar patches which displays an unconventional "pinched" phase diagram, similar to the one predicted by Tlusty and Safran in the context of dipolar fluids [Science 290, 1328 (2000)]. The model, which is based on two types of patch interactions that account, respectively, for chaining and branching of the self-assembled networks, is studied both numerically via Monte Carlo simulations and theoretically via first-order perturbation theory. The dense phase is rich in junctions, while the less-dense phase is rich in chain ends. The model provides a reference system for a deep understanding of the competition between condensation and self-assembly into equilibrium-polymer chains.
Abstract:
In competitive electricity markets with deep concerns for the efficiency level, demand response programs gain considerable significance. As demand response levels have decreased after the introduction of competition in the power industry, new approaches are required to take full advantage of demand response opportunities. Grid operators and utilities are taking new initiatives, recognizing the value of demand response for grid reliability and for the enhancement of organized spot markets' efficiency. This paper proposes a methodology for the selection of the consumers that participate in an event, which is the responsibility of the Portuguese transmission network operator. The proposed method is intended to be applied in the interruptibility service implemented in Portugal, in convergence with Spain, in the context of the Iberian electricity market. This method is based on the calculation of locational marginal prices (LMP), which are used to support the decision concerning the consumers to be scheduled for participation. The proposed method has been computationally implemented and its application is illustrated in this paper using a 937 bus distribution network with more than 20,000 consumers.
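One plausible reading of LMP-supported consumer selection is a greedy rule: curtail first at the buses where energy is most expensive, until the requested reduction is covered. This is an assumption for illustration; the paper's actual decision logic is not detailed in the abstract.

```python
def select_consumers(consumers, required_mw):
    """Greedy selection: schedule consumers at the buses with the
    highest LMP first, until the requested curtailment is covered.
    `consumers` is a list of (consumer_id, lmp, curtailable_mw)."""
    picked, covered = [], 0.0
    for cid, lmp, mw in sorted(consumers, key=lambda c: c[1], reverse=True):
        if covered >= required_mw:
            break
        picked.append(cid)
        covered += mw
    return picked

consumers = [("a", 30.0, 5.0), ("b", 50.0, 4.0), ("c", 40.0, 6.0)]
print(select_consumers(consumers, 8.0))  # ['b', 'c']
```

Curtailing at high-LMP buses relieves the most congested or costly parts of the network first, which is why LMPs are a natural signal for this interruptibility decision.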