29 results for Algebraic decoding

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance: 20.00%

Abstract:

Stair nesting allows us to work with fewer observations than the most usual form of nesting, balanced nesting. In stair nesting the amount of information is more evenly distributed across the different factors, and the design leads to greater economy because fewer observations are needed. In this work we present the algebraic structure of the cross of balanced nested and stair nested designs, using binary operations on commutative Jordan algebras. This new cross requires fewer observations than the usual cross of balanced nested designs, and inference is easy to carry out.
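For orientation, the observation counts usually quoted for these designs (an illustrative statement under the usual assumptions, not a result taken from this work): with $u$ nested factors having $a_1, \dots, a_u$ levels, balanced nesting uses a number of treatments equal to the product of the level numbers, whereas stair nesting uses only their sum,

$$
n_{\text{balanced}} \;=\; \prod_{i=1}^{u} a_i ,
\qquad
n_{\text{stair}} \;=\; \sum_{i=1}^{u} a_i ,
$$

so with, say, $a_1 = a_2 = a_3 = 4$ the two designs require $64$ and $12$ treatments respectively.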

Relevance: 20.00%

Abstract:

Binary operations on commutative Jordan algebras (CJAs) can be used to study interactions between sets of factors belonging to a pair of models in which one nests the other. It should be noted that from two CJAs we can, through these binary operations, build new CJAs. So when we nest the treatments of one model in each treatment of another model, we can study the interactions between the sets of factors of the first and the second models.
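As a minimal sketch of such a binary operation (standard in this setting; the notation is ours): a CJA can be given by its principal basis, a family of pairwise orthogonal orthogonal projection matrices, and crossing two models corresponds to taking Kronecker products of their bases,

$$
\mathcal{A}=\operatorname{span}\{Q_1,\dots,Q_m\},\quad
\mathcal{B}=\operatorname{span}\{P_1,\dots,P_n\}
\;\Longrightarrow\;
\mathcal{A}\otimes\mathcal{B}
=\operatorname{span}\{\,Q_i\otimes P_j:\ 1\le i\le m,\ 1\le j\le n\,\}.
$$

Since the $Q_i \otimes P_j$ are again pairwise orthogonal orthogonal projections, the result is itself a CJA, which is how new algebras are built from two given ones.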

Relevance: 10.00%

Abstract:

Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
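As a rough illustration of the approach (a minimal sketch with helper names of our own choosing, not the paper's algorithm): the suffix array is the sorted list of suffix start positions, and the longest dictionary match for the lookahead can be located by binary search followed by a scan towards the nearest suffixes that start before the current position.

```python
# Minimal sketch: longest-match search over a suffix array for an
# LZ77-style encoder. Naive O(n^2 log n) construction for clarity;
# a real encoder would use a linear-time builder such as SA-IS.

def build_suffix_array(text: bytes) -> list[int]:
    return sorted(range(len(text)), key=lambda i: text[i:])

def _lcp(a: bytes, b: bytes) -> int:
    # Length of the longest common prefix of a and b.
    n = min(len(a), len(b))
    k = 0
    while k < n and a[k] == b[k]:
        k += 1
    return k

def longest_match(text: bytes, sa: list[int], pos: int) -> tuple[int, int]:
    """Return (start, length) of the longest prefix of text[pos:] that
    also occurs starting before pos, or (0, 0) if there is none."""
    pattern = text[pos:]
    lo, hi = 0, len(sa)
    while lo < hi:                      # binary search among sorted suffixes
        mid = (lo + hi) // 2
        if text[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    best = (0, 0)
    # The common prefix with `pattern` can only shrink as we move away
    # from position `lo`, so the first suffix starting before `pos` in
    # each direction is the best candidate on that side.
    for step in (-1, 1):
        j = lo - 1 if step == -1 else lo
        while 0 <= j < len(sa):
            if sa[j] < pos:
                k = _lcp(text[sa[j]:], pattern)
                if k > best[1]:
                    best = (sa[j], k)
                break
            j += step
    return best

data = b"abracadabra abracadabra"
sa = build_suffix_array(data)
print(longest_match(data, sa, 12))      # (0, 11): repeats "abracadabra"
```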

Relevance: 10.00%

Abstract:

We derive a set of differential inequalities for positive definite functions based on previous results derived for positive definite kernels by purely algebraic methods. Our main results show that the global behavior of a smooth positive definite function is, to a large extent, determined solely by the sequence of even-order derivatives at the origin: if a single one of these vanishes then the function is constant; if they are all non-zero and satisfy a natural growth condition, the function is real-analytic and consequently extends holomorphically to a maximal horizontal strip of the complex plane.
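A standard instance of such inequalities, included for orientation (our illustration, not necessarily the paper's exact statement): if $f$ is a continuous positive definite function that is $2n$ times differentiable, then $(-1)^n f^{(2n)}$ is again positive definite, and a positive definite function attains its maximum modulus at the origin, so

$$
\bigl|f^{(2n)}(x)\bigr| \;\le\; (-1)^{n} f^{(2n)}(0) \qquad \text{for all } x.
$$

In particular, if a single even-order derivative vanishes at the origin, it vanishes identically; $f$ is then a polynomial, and a bounded polynomial (recall $|f(x)| \le f(0)$) must be constant, which is the mechanism behind the first claim above.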

Relevance: 10.00%

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder and not at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach to successively improve the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains up to 2.3 dB in the RD curves against the performance of the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
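A schematic rendering of the proposed decoding loop as we read it (the callables stand in for codec modules we do not have; all names are ours, not the paper's):

```python
# Sketch of decoder-side motion learning: after each transform band is
# decoded, the side information is re-estimated from everything decoded
# so far, so the remaining bands see better-quality side information.
# The three callables are abstract stand-ins for the codec's modules.

def decode_wz_frame(bands, initial_si, ldpc_decode_band, refine_si, reconstruct):
    si = initial_si                     # e.g. motion-compensated interpolation
    decoded_bands = []
    for band in bands:                  # bands arrive in decoding order
        decoded_bands.append(ldpc_decode_band(band, si))
        si = refine_si(si, decoded_bands)   # "motion learning" step
    return reconstruct(decoded_bands, si)
```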

Relevance: 10.00%

Abstract:

The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings up to 8.0% are obtained for similar decoded quality.
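For concreteness, a toy version of how multiple SI hypotheses can feed a soft-input decoder (our simplification: one-shot LLR fusion under an independence assumption, whereas the paper's decoders exchange information iteratively):

```python
import numpy as np

def si_llrs(si_bits: np.ndarray, p: float) -> np.ndarray:
    """Per-bit LLRs log P(x=0|y) / P(x=1|y) for one SI hypothesis whose
    correlation noise is modelled as a BSC with crossover probability p."""
    magnitude = np.log((1.0 - p) / p)
    return np.where(si_bits == 0, magnitude, -magnitude)

def fuse_hypotheses(hypotheses, crossover_probs):
    """Sum the LLRs of all hypotheses, treating their noises as
    independent; the result seeds the LDPC belief-propagation decoder."""
    return sum(si_llrs(h, p) for h, p in zip(hypotheses, crossover_probs))

# Two hypothetical SI hypotheses for an 8-bit plane.
h1 = np.array([0, 1, 1, 0, 0, 1, 0, 1])
h2 = np.array([0, 1, 0, 0, 0, 1, 0, 1])
print(fuse_hypotheses([h1, h2], [0.1, 0.2]))
```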

Relevance: 10.00%

Abstract:

Master's degree in Mathematics Education in Pre-School Education and in the 1st and 2nd Cycles of Basic Education

Relevance: 10.00%

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
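A minimal sketch of denoising-style fusion (the inverse-variance weighting below is a common choice we assume for illustration; the paper's adaptive combination may be derived differently):

```python
import numpy as np

def fuse_side_info(hypotheses, noise_vars):
    """Pixel-wise combination of SI hypotheses, each weighted by the
    inverse of its estimated virtual-channel noise variance, so more
    reliable hypotheses contribute more to the fused side information."""
    weights = 1.0 / np.asarray(noise_vars, dtype=float)
    weights /= weights.sum()
    stacked = np.stack([np.asarray(h, dtype=float) for h in hypotheses])
    return np.tensordot(weights, stacked, axes=1)

# Two hypothetical 2x2 SI blocks; the first is judged twice as reliable.
si_a = [[100, 102], [98, 101]]
si_b = [[108, 96], [104, 95]]
print(fuse_side_info([si_a, si_b], noise_vars=[1.0, 2.0]))
```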

Relevance: 10.00%

Abstract:

This text addresses the topic of algebraic thinking in the first years of schooling at the level of in-service teacher training. In the approach presented, the topic of patterns emerges as a structuring context for the development of algebraic thinking. We begin by framing the theoretical references that support the adoption of a didactic proposal developed in this scope, and then draw on an empirical study, of which we report here the part centred on the classroom practices of a first-cycle teacher in training, in order to confirm the effectiveness of the proposal in teaching and developing students' algebraic thinking. The results allow us to conclude that the teacher internalized and projected into her practice the essential aspects that led to the development of algebraic thinking in the students. The text ends with some conclusions and implications for teacher training.

Relevance: 10.00%

Abstract:

Final Master's Project for obtaining the degree of Master in Electronics and Telecommunications Engineering

Relevance: 10.00%

Abstract:

Final Master's Project for obtaining the degree of Master in Civil Engineering in the specialization area of Communication Routes and Transport

Relevance: 10.00%

Abstract:

In the last decade, local image features have been widely used in robot visual localization. To assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image to those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, we compare several candidate combiners with respect to their performance in the visual localization task. A deeper insight into the potential of the sum and product combiners is provided by testing two extensions of these algebraic rules: threshold and weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance. The voting method, whilst competitive with the algebraic rules in their standard form, is shown to be outperformed by both their modified versions.
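To fix ideas, hedged implementations of the standard algebraic combiners and the voting scheme compared above (a sketch; the threshold and weighted variants in the paper are parameterized in ways we do not reproduce exactly):

```python
import numpy as np

# Each row of `posteriors` holds one classifier's scores over the places.

def sum_rule(posteriors: np.ndarray) -> int:
    return int(posteriors.sum(axis=0).argmax())

def product_rule(posteriors: np.ndarray) -> int:
    # Summing logs is numerically safer than multiplying many small values.
    return int(np.log(posteriors + 1e-12).sum(axis=0).argmax())

def weighted_sum_rule(posteriors: np.ndarray, weights: np.ndarray) -> int:
    # Weighted variant: each classifier contributes proportionally to a weight.
    return int((weights[:, None] * posteriors).sum(axis=0).argmax())

def majority_vote(posteriors: np.ndarray) -> int:
    votes = posteriors.argmax(axis=1)
    return int(np.bincount(votes, minlength=posteriors.shape[1]).argmax())

scores = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3],
                   [0.4, 0.4, 0.2]])
print(sum_rule(scores), product_rule(scores), majority_vote(scores))
```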

Relevance: 10.00%

Abstract:

Dissertation presented to the Escola Superior de Educação de Lisboa for obtaining the master's degree in Mathematics Education in Pre-School Education and in the 1st and 2nd Cycles of Basic Education

Relevance: 10.00%

Abstract:

Myocardial perfusion gated single photon emission tomography (Gated-SPET) imaging is used for the combined evaluation of myocardial perfusion and left ventricular (LV) function. The purpose of this study is to evaluate the influence of the total number of counts acquired from the myocardium on the calculation of myocardial functional parameters using routine software procedures. Methods: Gated-SPET studies were simulated using the Monte Carlo GATE package and the NURBS phantom. Simulated data were reconstructed and processed using the commercial software package Quantitative Gated-SPECT. The Bland-Altman and Mann-Whitney-Wilcoxon tests were used to analyze the influence of the total number of counts on the calculation of LV myocardial functional parameters. Results: In studies simulated with 3 MBq in the myocardium there were significant differences in the functional parameters left ventricular ejection fraction (LVEF), end-systolic volume (ESV), motility, and thickness between studies acquired with 15 s/projection and 30 s/projection. Simulations with 4.2 MBq show significant differences in LVEF, end-diastolic volume (EDV), and thickness, while in the simulations with 5.4 MBq and 8.4 MBq the differences were statistically significant for motility and thickness. Conclusion: The total number of counts per simulation does not significantly interfere with the determination of Gated-SPET functional parameters for the average administered activity of 450 MBq, corresponding to 5.4 MBq in the myocardium.
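For readers wanting to reproduce the style of comparison, a small sketch with made-up numbers (the LVEF values below are hypothetical, not the study's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical LVEF estimates (%) from the two acquisition durations.
lvef_15s = np.array([55.1, 57.3, 54.8, 56.0, 55.6])
lvef_30s = np.array([58.2, 59.0, 57.7, 58.5, 58.9])

stat, p_value = mannwhitneyu(lvef_15s, lvef_30s, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")  # p < 0.05 suggests a significant difference
```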

Relevance: 10.00%

Abstract:

Human virtual phantoms are widely used to simulate and characterize the behavior of different organs, both at the diagnosis stage and to anticipate the therapeutic effects obtained on a given patient. In the present work a typical patient's heart was simulated using XCAT2©, considering the possibility of a lesion and/or anatomical alteration affecting the myocardium. These simulated images were then used to carry out a set of parametric studies using Matlab©. Although performed in controlled scenarios, these studies are very important to understand and characterize the performance of the methodologies used, as well as to determine to what extent the relations between the perturbation introduced in the myocardium and the resulting simulated images can be considered conclusive.