34 results for Information Operations
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Sticky information monetary models have been used in the macroeconomic literature to explain some of the observed features of inflation dynamics. In this paper, we explore the consequences of relaxing the rational expectations assumption usually adopted in this type of model; in particular, by considering expectations formed through adaptive learning, it is possible to arrive at results other than the trivial convergence to a fixed-point long-term equilibrium. The results include the possibility of endogenous cyclical motion (periodic and aperiodic), which emerges essentially in hyperinflation scenarios. In low inflation settings, the introduction of learning implies a less severe impact of monetary shocks, which nevertheless tend to last for additional time periods relative to the pure perfect foresight setup.
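As a toy illustration of the learning mechanism only (not the paper's sticky-information model, which is far richer), a constant-gain adaptive learning rule revises the inflation forecast toward each new observation; the gain value and the scenario below are invented:

```python
# Illustrative sketch: constant-gain adaptive learning for inflation
# expectations. The update rule, gain value and scenario are invented
# for illustration; the paper's sticky-information model is richer.

def learn(pi_expected, pi_observed, gain=0.1):
    """Revise the inflation forecast toward the last observation."""
    return pi_expected + gain * (pi_observed - pi_expected)

# A simple path: agents start expecting 2% inflation but keep
# observing 5%, so the forecast is revised upward each period.
expectations = [2.0]
for _ in range(20):
    expectations.append(learn(expectations[-1], 5.0))

# With a constant gain between 0 and 1, the forecast approaches 5%
# monotonically; richer specifications can instead generate the
# cyclical dynamics discussed in the abstract.
```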
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC, especially because the decoder only has some decoded reference frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions can then be derived about the impact of motion field smoothness, and of its correlation with the true motion trajectories, on compression performance.
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated from past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it would be useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second corresponds to a motion compensated quality enhancement (MCQE) technique in which a low quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low quality Intra coded blocks. The overall solution is evaluated in terms of RD performance, with improvements of up to 2 dB, especially for high motion video sequences and long Group of Pictures (GOP) sizes.
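The two-mode idea can be caricatured as a per-block decision; the function names, the correlation estimate and the fixed threshold below are all invented for illustration, and the actual codec's decision is rate-distortion driven rather than a simple cutoff:

```python
# Hypothetical sketch of a block-level side-information mode decision:
# blocks whose interpolated SI is expected to correlate poorly with the
# source fall back to the MCQE path, at the cost of some Intra rate.
# Names, values and the threshold are invented for illustration.

def choose_si_mode(estimated_correlation, threshold=0.5):
    """Pick MCI for well-predicted blocks, MCQE otherwise."""
    return "MCI" if estimated_correlation >= threshold else "MCQE"

# Three blocks with decreasing prediction quality in the middle one.
modes = [choose_si_mode(c) for c in (0.9, 0.3, 0.6)]
```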
Abstract:
The mission of the weapons systems of the Portuguese Air Force (Força Aérea Portuguesa, FAP) is the military defence of Portugal, through air operations and the defence of national airspace, with the F-16 being the organization's main attack aircraft. In this context, and given the current global economic climate, organizations must make the most of all available resources and associated costs and optimize work processes. Building on these premises, this study analyses the implementation of lean at the FAP, since this philosophy rests on the elimination of waste in order to improve quality and reduce times and costs. The analysis therefore focuses on F-16 maintenance, specifically the Phase Inspection (Inspeção de Fase, IF), a type of maintenance that this aircraft undergoes every three hundred flight hours. The case study covers two moments of the IF. The first concerns the processing of the data collected for the preliminary meeting at which the maintenance actions to be carried out while the aircraft is grounded are defined for the executing work areas; here, the aim is to determine the causes of the delays observed in holding this meeting. The second concerns the information obtained from the SIAGFA software application, in use at the FAP, for processing the maintenance data of the four aircraft that inaugurated the IF under the lean philosophy. This analysis made it possible to determine the number of work hours spent (on average across the four aircraft) on each work card, showing that the additional cards consume more hours; to identify the work areas considered critical; and to identify the days of work performed and the downtime periods without any intervention. The number of work hours spent on the IF was also assessed per aircraft, together with the constraints observed for the aircraft that did not complete the IF within the time defined for it.
Abstract:
Technological and societal evolution means that, nowadays, a large part of the population has access to mobile devices with advanced capabilities. With this type of device we can reach countless sources of real-time information, yet this capability is still not exploited to its full extent. This project takes advantage of that reality to build a traffic information exchange network out of the various mobile devices. Users only need their mobile device to automatically obtain the latest traffic information while, in parallel, sharing their own information with other users. Although other alternatives exist on the market offering the same kind of functionality, none uses this type of device (relying instead, for example, on conventional GPS units). One of the requirements of this project is a geocoding solution. Several solutions were tested, but none fully met the project's requirements, which led to the development of a new solution that does. The solution is highly modular, formed by several components, each with well-defined responsibilities. Its architecture follows the design patterns of a Service Oriented Architecture: every component exposes its operations through web services, and component discovery relies on the WS-Discovery protocol. The components fall into two categories: the core components, responsible for building and offering the functionality required by this project, and the external modules, which include the applications that present that functionality to the user. Two ways of consuming the information offered by the SIAT service were created: a mobile application and a website. For mobile devices, an application was developed for the Windows Phone 7 operating system.
Abstract:
Nowadays, cooperative intelligent transport systems are part of a larger system: transportation comprises modal operations integrated in logistics, and logistics is the main process of supply chain management. Strategic supply chain management, as a simultaneously local and global value chain, is a collaborative/cooperative organization of stakeholders, often in co-opetition, to deliver a service to customers while respecting time, place, price and quality levels. Transportation, like other logistics operations, must add value, which is achieved in this case by compressing lead times and order fulfilment. The complex supplier network and the distribution channels must be efficient, and full visibility (monitoring and tracing) of the supply chain is a significant source of competitive advantage. Nowadays, competition takes place not between individual companies but among supply chains. This paper aims to highlight the current and emerging manufacturing and logistics system challenges as a new field of opportunities for the automation and control systems research community. Furthermore, the paper forecasts the use of radio frequency identification (RFID) technologies integrated into an information and communication technologies (ICT) framework based on distributed artificial intelligence (DAI) supported by a multi-agent system (MAS) as the greatest value-adding lever of supply chain management (SCM) in cooperative intelligent logistics systems. Logistics platforms (production or distribution), as value-adding nodes of the supply and distribution networks, are proposed as critical points of inventory visibility, where these technological needs are most evident.
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
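For intuition only, a minimal one-dimensional sketch of motion compensated frame interpolation is given below: a symmetric displacement is searched between the two decoded reference frames, and the missing frame is interpolated as the average of the two motion-compensated predictions. The block size, search range and SAD criterion are illustrative choices, not the regularized motion estimation actually proposed in the paper:

```python
# Minimal 1-D sketch of motion compensated frame interpolation (MCFI).
# For each block of the missing frame, a symmetric displacement v is
# searched so that prev shifted by +v matches nxt shifted by -v; the
# block is then the average of the two motion-compensated predictions.
# Block size, search range and the SAD cost are illustrative choices.

def sad(a, b):
    """Sum of absolute differences between two sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def interpolate_frame(prev, nxt, block=4, search=2):
    n = len(prev)
    out = [0.0] * n
    for start in range(0, n, block):
        best, best_v = None, 0
        for v in range(-search, search + 1):
            # Symmetric candidates: +v into the past, -v into the future
            # (indices clamped at the frame borders).
            p = [prev[min(max(i + v, 0), n - 1)] for i in range(start, start + block)]
            f = [nxt[min(max(i - v, 0), n - 1)] for i in range(start, start + block)]
            cost = sad(p, f)
            if best is None or cost < best:
                best, best_v = cost, v
        for i in range(start, start + block):
            p = prev[min(max(i + best_v, 0), n - 1)]
            f = nxt[min(max(i - best_v, 0), n - 1)]
            out[i] = (p + f) / 2.0
    return out

# An edge moving one sample per frame: the interpolated middle frame
# recovers the intermediate edge position.
prev_frame = [0, 0, 0, 0, 9, 9, 9, 9]
next_frame = [0, 0, 9, 9, 9, 9, 9, 9]
middle = interpolate_frame(prev_frame, next_frame)
```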
Abstract:
Preliminary version
Abstract:
The aim of this paper is to establish some basic guidelines to help draft the information letter that would be sent to individual contributors should it be decided to use this model in the Spanish public pension system. With this end in mind, and building on the experience of the most advanced countries in the field and on the pioneering papers by Jackson (2005), Larsson et al. (2008) and Sunden (2009), we examine the concept of “individual pension information” and identify its most relevant characteristics. We then give a detailed description of two models, those of the United States and Sweden, looking in particular at how they are structured, which aspects could be improved and what their limitations are. Finally, we make some recommendations of special interest for designing the Spanish model.
Abstract:
The aim of this study is to assess the levels of airborne ultrafine particles emitted by welding processes (tungsten inert gas [TIG] and metal active gas [MAG] welding of carbon steel, and friction stir welding [FSW] of aluminum) in terms of the surface area deposited in the pulmonary alveolar tract, using a nanoparticle surface area monitor (NSAM) analyzer. The results showed that the levels of emitted ultrafine particles depend on the process parameters and demonstrated the presence of ultrafine particles above background levels. The data indicated that the process with the lowest levels of alveolar deposited surface area (ADSA) was FSW, followed by TIG and MAG. However, all tested processes resulted in significant concentrations of ultrafine particles being deposited in the lungs of exposed workers.
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. It is thus proposed to exploit the multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
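As a toy sketch of why multiple hypotheses help (the numbers and the independence assumption are invented, and the actual codec exchanges messages between full LDPC syndrome decoders rather than simply pooling soft inputs): each SI hypothesis yields a per-bit log-likelihood ratio (LLR), and, if the hypotheses are treated as conditionally independent, their LLRs can be summed before decoding, so the more confident hypothesis dominates where they disagree:

```python
# Toy illustration of combining soft information from multiple side
# information hypotheses. Treating hypotheses as conditionally
# independent, per-bit LLRs are summed before the hard decision; the
# real codec instead iterates between LDPC syndrome decoders.

def combine_llrs(hypothesis_llrs):
    """Sum per-bit LLRs across side-information hypotheses."""
    return [sum(bit_llrs) for bit_llrs in zip(*hypothesis_llrs)]

def hard_decision(llrs):
    """LLR > 0 -> bit 0, otherwise bit 1 (a common convention)."""
    return [0 if llr > 0 else 1 for llr in llrs]

# Two hypotheses disagree on the middle bit; the more confident
# hypothesis wins after combination.
h1 = [+2.0, -0.5, +1.0]
h2 = [+1.5, +2.5, +0.8]
decoded = hard_decision(combine_llrs([h1, h2]))
```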
Abstract:
The remote management (telemetry and control) system is a tool that allows the real-time management of the entire water supply system of Empresa Portuguesa das Águas Livres, S.A. (EPAL). This management can extend from water abstraction to delivery to the final customer, through the monitoring resources needed for the command operations that allow the system's assets (pumping stations, reservoirs, valves, ...) to be controlled and operated remotely. This dissertation aims to disseminate and compile the elements needed to make the most of the potential that remote management offers, thereby addressing, given its specificity, a topic that is little publicized but extremely important for anyone who works, or intends to work, in a similar utility. The dissertation comprises six chapters covering the characterization of EPAL's abstraction, transmission and distribution system; a general overview of the tools that support system operation; a historical review of the remote management system at EPAL; and information on the current remote management system, namely its architecture and main functionalities, such as the remote control of operating devices and the real-time analysis of water quality parameters. Finally, some conclusions and recommendations for future work are presented. This document is thus intended to bring together information on remote management systems for water supply, the advantages tied to their functionalities, and the identification of weaknesses in the system that could be improved or even eliminated.
Abstract:
Master's degree in Accounting and Management of Financial Institutions
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop so-called Wyner-Ziv video codecs, following a different coding paradigm in which it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide the review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that which side information creation method provides the best rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
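A hedged sketch of the adaptive combination idea follows: each hypothesis is weighted by the inverse of its estimated noise variance, a standard denoising-style rule. The pixel values and variances are invented, and the paper's actual channel statistics and iteration schedule are not reproduced here:

```python
# Denoising-style fusion sketch: combine side-information hypotheses
# pixel-wise, weighting each by the inverse of its estimated virtual
# channel noise variance. Values and variances below are invented.

def fuse(hypotheses, noise_vars):
    """Inverse-variance weighted average of SI hypotheses, pixel-wise."""
    weights = [1.0 / v for v in noise_vars]
    total = sum(weights)
    return [
        sum(w * h[i] for w, h in zip(weights, hypotheses)) / total
        for i in range(len(hypotheses[0]))
    ]

# Hypothesis A is reliable (low variance), B is noisy: the fused
# pixels stay close to A while still using information from B.
fused = fuse([[100.0, 50.0], [120.0, 70.0]], noise_vars=[1.0, 9.0])
```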