28 results for information content
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is the exploitation of raw feature descriptors, thus avoiding approximations due to quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminativity, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry when applied in its simplest form. This reduction is achieved through a hierarchical strategy which gradually discards candidate locations, and by exploring two simplifying assumptions about the training data. The potential of the non-quantized representation is exploited by resorting to the entropy-discriminativity relation. The idea behind this approach is that the non-quantized representation facilitates the assessment of the distinctiveness of features through the entropy measure. Building on this finding, the robustness of the localization system is enhanced by modulating the importance of features according to the entropy measure. Experimental results support the effectiveness of this approach, as well as the validity of the proposed computation reduction methods.
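The entropy-discriminativity relation above can be illustrated in a few lines: a feature whose similarity mass concentrates on one candidate location has low entropy and is up-weighted, while a feature that matches everywhere carries no information and is discounted. A minimal Python sketch, assuming a hypothetical input format (one similarity value per candidate location, per feature); this is not the paper's actual pipeline.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def feature_weight(similarities):
    """Weight a feature by how peaked its similarity distribution is
    across candidate locations: low entropy -> distinctive -> high weight."""
    total = sum(similarities)
    probs = [s / total for s in similarities]
    h_max = math.log2(len(similarities))  # entropy of a uniform distribution
    return 1.0 - entropy(probs) / h_max   # 1 = fully distinctive, 0 = uninformative

def localize(query_features, num_locations):
    """Accumulate entropy-weighted votes per candidate location and
    return the winning location index."""
    scores = [0.0] * num_locations
    for sims in query_features:
        w = feature_weight(sims)
        best = max(range(num_locations), key=lambda i: sims[i])
        scores[best] += w
    return max(range(num_locations), key=lambda i: scores[i])
```

A perfectly ambiguous feature receives weight zero, so it cannot sway the vote, which is the robustness mechanism the abstract describes.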
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and not anymore of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them leads to the important conclusion that the rate-distortion (RD) performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
We live in a changing world. At an impressive speed, new technological resources appear every day. We increasingly use the Internet to obtain and share information, and new online communication tools are emerging. Each of them encompasses new potential and creates new audiences. In recent years, we witnessed the emergence of Facebook, Twitter, YouTube and other media platforms. They have provided us with an even greater interactivity between sender and receiver, as well as generated a new sense of community. At the same time, we also see an availability of content like never before. We are increasingly sharing texts, videos, photos, etc. This poster intends to explore the potential of using these new online communication tools in the cultural sphere to create new audiences, to develop a new kind of community, and to provide information as well as different ways of building organizations' memory. The transience of performing arts is accompanied by the need to counter that transience by means of documentation. This desire to 'save' events finds its expression in the archiving of information from the different production moments, as well as in the opportunity to record the event and present it through, for instance, digital platforms. In this poster we intend to answer the following questions: which online communication tools are being used to engage audiences in the cultural sphere (specifically among theater companies in Lisbon)? Is there a new relationship with the public? Are online communication tools creating a new kind of community? What changes are these tools introducing in the creative process? In what way do the availability of content and its archiving contribute to the organization's memory? Among several references, we will approach the two-way communication model that James E. Grunig & Todd T. Hunt (1984) presented, and the concept of mass self-communication of Manuel Castells (2010).
Castells also tells us that we have moved from traditional media to a system of communication networks. For Scott Kirsner (2010), we have entered an era of digital creativity, where artists have the tools to do what they imagined and the public no longer wants just to consume cultural goods, but instead to have a voice and participate. The creative process now depends on the public's choices as they wander through the screen. It is the receiver who owns an object which can be exchanged. Virtual reality has encouraged receivers to abandon their position of passive observer and become participant agents, which implies a challenge to organizations: inventing new forms of interfaces. Therefore, we intend to find new and effective online tools that can be used by cultural organizations; the best way to manage them; to show how organizations can create a community with the public; and how the availability of online content and its archiving can contribute to the organizations' memory.
Abstract:
Sticky information monetary models have been used in the macroeconomic literature to explain some of the observed features regarding inflation dynamics. In this paper, we explore the consequences of relaxing the rational expectations assumption usually taken in this type of model; in particular, by considering expectations formed through adaptive learning, it is possible to arrive at results other than the trivial convergence to a fixed-point long-term equilibrium. The results involve the possibility of endogenous cyclical motion (periodic and aperiodic), which emerges essentially in scenarios of hyperinflation. In low-inflation settings, the introduction of learning implies a less severe impact of monetary shocks that, nevertheless, tend to last for additional time periods relative to the pure perfect-foresight setup.
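The mechanics of replacing rational expectations with adaptive learning can be illustrated with a toy reduced-form inflation equation (a hypothetical example, not the paper's sticky-information model): actual inflation depends on the expected rate, and agents revise their forecast by a constant fraction of the last forecast error.

```python
def adaptive_learning_path(a, b, gain, initial_expectation, periods):
    """Simulate inflation under constant-gain adaptive learning.
    Assumed reduced form (for illustration only): pi_t = a + b * E_t,
    with forecast update E_{t+1} = E_t + gain * (pi_t - E_t)."""
    expectation = initial_expectation
    path = []
    for _ in range(periods):
        pi = a + b * expectation          # realized inflation this period
        expectation += gain * (pi - expectation)  # learning step
        path.append(pi)
    return path
```

With |1 + gain*(b - 1)| < 1 the path converges to the fixed point a/(1 - b); for other parameterizations the same recursion can sustain persistent fluctuations, which is the kind of non-trivial dynamics the abstract refers to.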
Abstract:
The deposition of amyloid fibers at the peripheral nervous system can induce motor neuropathy in Familial Amyloidotic Polyneuropathy (FAP) patients. This produces progressive reductions in functional capacity. The only treatment for FAP is a liver transplant, followed by aggressive medication that can affect patients' metabolism. To our knowledge, there are no data on body fat distribution or comparison between healthy and FAP subjects, which may be important for clinical assessment and management of this disease. PURPOSE: To analyze body fat content and distribution in FAP patients and healthy subjects. METHODS: Body fat content and distribution were measured through Dual-Energy X-ray Absorptiometry (DXA) in two groups. Group 1 consisted of 43 Familial Amyloidotic Polyneuropathy patients (19 males, 32±8 yrs, and 24 females, 37±5 yrs), who had undergone liver transplant less than 2 months before. Group 2 consisted of 18 healthy subjects of similar age (8 males, 36±7 yrs, and 10 females, 39±5 yrs). RESULTS: Healthy subjects showed higher values than FAP patients for: BMI (24.2±2.3 kg/m² vs 22.3±3.8 kg/m², respectively, p<0.05), trunk BF (26.21±8.34 kg vs 20.78±9.05 kg, respectively, p<0.05), % visceral BF (24.43±7.97% vs 19.21±9.30%, respectively, p<0.05), % abdominal BF (26.63±8.51% vs 20.63±10.35%, respectively, p<0.05), abdominal subcutaneous BF (0.533±0.421 kg vs 0.353±0.257 kg, respectively, p=0.05), abdominal BF/BF ratio (0.09±0.02 vs 0.08±0.02, respectively, p<0.05) and abdominal BF/trunk BF ratio (0.19±0.03 vs 0.17±0.03, respectively, p<0.05). CONCLUSIONS: These results showed that FAP patients soon after liver transplantation exhibited a healthier body fat profile compared to controls. However, fat content and distribution varied widely in FAP subjects, suggesting an individualized approach for assessment and intervention rather than general guidelines. Future research is needed to investigate the long-term consequences for body fat following liver transplant in this population.
Abstract:
The deposition of amyloid fibers at the peripheral nervous system can induce motor neuropathy in Familial Amyloidotic Polyneuropathy (FAP) patients. This produces progressive reductions in functional capacity. The only treatment for FAP is a liver transplant, followed by aggressive medication that can affect patients' metabolism. To our knowledge, there are no data on body fat distribution or comparison between healthy and FAP subjects, which may be important for clinical assessment and management of this disease.
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes the WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC, especially because the decoder only has some decoded reference frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions may then be derived regarding the impact of the motion field's smoothness, and of its correlation with the true motion trajectories, on the compression performance.
Abstract:
The principle of the Sparse Point Representation (SPR) method is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions, and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information of its SPR content is performed in two steps. The first one is a refinement procedure to extend the SPR by the inclusion of new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated to the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure, in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
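The two ingredients described above can be sketched concretely: the cubic (four-point, Deslauriers-Dubuc type) interpolating subdivision rule used to predict missing point values, and a central difference whose step size is proportional to a point's local scale. This is an illustrative fragment, not the full SPR machinery; the function names are ours.

```python
def dd4_midpoint(fm1, f0, f1, f2):
    """Cubic interpolating subdivision: predict the midpoint value
    between f0 and f1 from the four surrounding samples.  Exact for
    polynomials up to degree 3."""
    return (-fm1 + 9 * f0 + 9 * f1 - f2) / 16.0

def adaptive_central_diff(f, x, level, coarsest_h):
    """Central finite difference whose step is proportional to the
    point's local scale: h = coarsest_h / 2**level, so finer grid
    points use proportionally smaller steps."""
    h = coarsest_h / 2 ** level
    return (f(x + h) - f(x - h)) / (2 * h)
```

In the SPR setting, `dd4_midpoint` would supply any stencil values missing from the adaptive grid before `adaptive_central_diff` is applied at each retained point.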
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is through motion compensated frame interpolation, where the current frame is estimated based on past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it would be useful to design an architecture where the SI can be more robustly generated at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second mode corresponds to a motion compensated quality enhancement (MCQE) technique, where a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra coding blocks. The overall solution is evaluated in terms of RD performance, with improvements up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
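The block-level two-mode idea can be sketched as follows. The decision criterion below (a residual-energy threshold) is an illustrative proxy, not the paper's actual rule, and the zero-motion averaging stands in for full motion compensated interpolation.

```python
def mci_block(past_block, future_block):
    """MCI mode sketch: estimate the block by averaging the co-located
    past and future reference blocks (zero-motion, for brevity; a real
    MCI would motion-compensate both references first)."""
    return [(p + f) / 2.0 for p, f in zip(past_block, future_block)]

def choose_si_mode(residual_energy, threshold):
    """Per-block mode decision (hypothetical criterion): when the
    interpolation residual energy suggests poorly correlated SI,
    switch to MCQE -- send a low-quality Intra block and let the
    decoder refine it by motion estimation against the references."""
    return "MCQE" if residual_energy > threshold else "MCI"
```

The point of the framework is exactly this per-block granularity: only the blocks flagged as low-correlation pay the extra Intra rate.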
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion-compensated errors in some frame regions and rather small ones in others, depending on the video content. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements up to 1.2 dB relative to a solution using the WZ coding mode only.
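A per-block rate-based mode decision of this flavor can be sketched with the classical Gaussian rate model; this is a generic illustration of the idea, not the paper's actual rate estimator.

```python
import math

def gaussian_rate(variance, distortion=1.0):
    """Rate (bits/sample) to code a Gaussian source of the given
    variance at a target distortion: R = max(0, 0.5*log2(var/D))."""
    if variance <= distortion:
        return 0.0
    return 0.5 * math.log2(variance / distortion)

def select_block_mode(spatial_residual_var, temporal_residual_var):
    """Pick Intra when the spatial-prediction residual is cheaper to
    code than the temporal (side-information) residual -- i.e. for the
    'critical' high-motion blocks; otherwise keep the WZ mode."""
    r_intra = gaussian_rate(spatial_residual_var)
    r_wz = gaussian_rate(temporal_residual_var)
    return "Intra" if r_intra < r_wz else "WZ"
```

Because both estimates use only block variances, a decision of this kind keeps the encoder complexity low, which is the constraint the abstract emphasizes.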
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder, and not at the encoder as in predictive video coding. Although many improvements have been made over the last years, the performance of state-of-the-art WZ video codecs has still not reached the performance of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of a considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform-domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement and objective quality gains of up to 0.63 dB overall for sequences with high motion content when large groups of pictures are used.
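The core idea of an H.264/AVC-style adaptive deblocking filter can be sketched in one dimension: smooth across a block boundary only when the discontinuity looks like a coding artefact rather than a true image edge. The thresholds and the correction below are simplified placeholders, not the standard's actual filter equations.

```python
def deblock_edge(p1, p0, q0, q1, alpha, beta):
    """1-D sketch of an adaptive deblocking decision: p1,p0 are the two
    samples left of a block boundary, q0,q1 the two samples right of it.
    Filter only when the boundary step |p0-q0| is small enough to be an
    artefact (below alpha) and the signal is locally flat on each side
    (gradients below beta); large steps are kept as true edges."""
    if abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta:
        delta = (q0 - p0) / 4.0  # simple low-pass correction toward the mean
        return p0 + delta, q0 - delta
    return p0, q0  # true edge: leave untouched
```

The adaptivity is in the conditions, not the correction: tuning alpha and beta (in H.264/AVC they depend on the quantization parameter) decides which boundaries get smoothed.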
Abstract:
A Partisol-Plus sequential air sampler was placed in a small village (Foros de Arrão) in central Portugal to collect PM10 (particles with an aerodynamic diameter below 10 μm) during the winter period, for 3 months (December 2009–March 2010). Particle masses were gravimetrically determined, and the filters were analyzed by instrumental neutron activation analysis to assess their chemical composition. The water-soluble ion compositions of the collected particles were determined by ion-exchange chromatography. Principal component analysis was applied to the data set of chemical elements and soluble ions to assess the main sources of the air pollutants. The use of both analytical techniques provided information about elemental solubility, such as for potassium, which was important for differentiating sources.
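Principal component analysis for source apportionment works by eigendecomposing the correlation matrix of the standardized species concentrations: species that load strongly on the same component co-vary across samples, hinting at a common emission source. A minimal NumPy sketch on synthetic data (the two latent "sources" and their tracers below are hypothetical, not the study's results):

```python
import numpy as np

def pca_sources(data):
    """PCA via eigendecomposition of the correlation matrix.
    data: (n_samples, n_species) array of concentrations.
    Returns eigenvalues (variance explained) and loadings,
    both ordered from the strongest component down."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize species
    corr = (z.T @ z) / len(z)                          # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)            # ascending order
    order = np.argsort(eigvals)[::-1]                  # strongest first
    return eigvals[order], eigvecs[:, order]
```

On data driven by two independent latent sources, the first two components should capture nearly all the variance, which is how the number of plausible pollutant sources is read off in practice.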
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it allows outperforming H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
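Motion-field regularization of the kind mentioned above is often realized as local smoothing of the estimated vectors: an outlier vector that breaks the smoothness of the true motion gets replaced by a value consistent with its neighbors before compensation. A component-wise median filter is one simple stand-in for such regularization (an illustration, not the paper's specific technique):

```python
from statistics import median

def smooth_motion_field(field):
    """Component-wise 3x3 median filtering of a grid of (dx, dy) motion
    vectors: each vector is replaced by the per-component median of its
    neighborhood, suppressing isolated outlier vectors while leaving
    coherent motion untouched."""
    rows, cols = len(field), len(field[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neigh = [field[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = (median(v[0] for v in neigh),
                         median(v[1] for v in neigh))
    return out
```

Smoother motion fields track the true trajectories better, which is precisely why regularization improves the interpolated side information quality.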
Abstract:
Preliminary version
Abstract:
The aim of this paper is to establish some basic guidelines to help draft the information letter sent to individual contributors should it be decided to use this model in the Spanish public pension system. With this end in mind and basing our work on the experiences of the most advanced countries in the field and the pioneering papers by Jackson (2005), Larsson et al. (2008) and Sunden (2009), we look into the concept of “individual pension information” and identify its most relevant characteristics. We then give a detailed description of two models, those in the United States and Sweden, and in particular look at how they are structured, what aspects could be improved and what their limitations are. Finally we make some recommendations of special interest for designing the model for Spain.