35 results for Interpolation variance
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has some decoded reference frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. From this model, some interesting conclusions can be derived about the impact of motion field smoothness, and of its correlation with the true motion trajectories, on compression performance.
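The quantity at the heart of the rate model above, the power spectral density of the side-information estimation error, can be sketched numerically. The frames and noise level below are invented stand-ins, not data from the paper:

```python
import numpy as np

# Hypothetical illustration: the 2-D periodogram of the estimation error
# between an original frame and its MCFI side information. All values here
# are synthetic; only the PSD-of-error idea comes from the abstract.
rng = np.random.default_rng(0)
original = rng.standard_normal((64, 64))                      # stand-in frame
side_info = original + 0.3 * rng.standard_normal((64, 64))    # noisy SI estimate

error = original - side_info                    # estimation (correlation) noise
psd = np.abs(np.fft.fft2(error)) ** 2 / error.size   # 2-D periodogram

# By Parseval's theorem the mean of this periodogram equals the mean-square
# estimation error; a flatter PSD indicates a noise-like, less accurate
# motion field, while low-frequency concentration indicates a smoother fit.
mse = np.mean(error ** 2)
```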
Abstract:
Master's degree in Accounting and Management of Financial Institutions
Abstract:
The principle of the Sparse Point Representation (SPR) method is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information of its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structures. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
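The cubic interpolating subdivision scheme mentioned above is, in its standard form, the 4-point Deslauriers-Dubuc rule: kept coarse points are copied, and each new midpoint is predicted with weights (-1, 9, 9, -1)/16, so cubic polynomials are reproduced exactly. A minimal one-level sketch (the linear fallback at the two boundary midpoints is a simplification, not the paper's boundary treatment):

```python
import numpy as np

def subdivide(coarse):
    """One level of the cubic (4-point) interpolating subdivision scheme."""
    c = np.asarray(coarse, dtype=float)
    n = len(c)
    fine = np.empty(2 * n - 1)
    fine[0::2] = c                                    # retain coarse points
    # interior midpoints: exact for cubic polynomials
    fine[3:-3:2] = (-c[:-3] + 9 * c[1:-2] + 9 * c[2:-1] - c[3:]) / 16.0
    # boundary midpoints: linear prediction (a simplification for the sketch)
    fine[1] = 0.5 * (c[0] + c[1])
    fine[-2] = 0.5 * (c[-2] + c[-1])
    return fine

# samples of x**3 on the integers 0..5; interior midpoints are exact
fine = subdivide(np.arange(6.0) ** 3)
```

In the SPR setting, the wavelet coefficient at a midpoint is simply the true sample minus this prediction, which is why significant coefficients flag irregular regions.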
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated from past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it would be useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second corresponds to a motion compensated quality enhancement (MCQE) technique, where a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. The novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra coded blocks, for blocks where MCI produces SI with lower correlation. The overall solution is evaluated in terms of RD performance, with improvements of up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
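The block-level MCI idea can be sketched as a symmetric motion search: for each block of the frame being estimated, find the motion vector minimising the matching error between the past and future references, then average the two matched blocks as SI. The frames, block position and search range below are invented; a real codec uses far more refined matching criteria:

```python
import numpy as np

def mci_block(past, future, top, left, bsize=8, search=4):
    """Toy symmetric-search MCI for one block: returns (SI block, best SAD)."""
    best_sad, best_si = np.inf, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = top + dy, left + dx        # candidate block in past frame
            y1, x1 = top - dy, left - dx        # symmetric block in future frame
            if min(y0, x0, y1, x1) < 0:
                continue                        # skip out-of-frame candidates
            if max(y0, y1) + bsize > past.shape[0] or max(x0, x1) + bsize > past.shape[1]:
                continue
            b0 = past[y0:y0 + bsize, x0:x0 + bsize]
            b1 = future[y1:y1 + bsize, x1:x1 + bsize]
            sad = np.abs(b0 - b1).sum()         # sum of absolute differences
            if sad < best_sad:
                best_sad, best_si = sad, 0.5 * (b0 + b1)
    return best_si, best_sad

rng = np.random.default_rng(1)
past = rng.random((32, 32))
future = np.roll(past, shift=(2, 0), axis=(0, 1))   # pure vertical motion, 2 px
si, sad = mci_block(past, future, top=8, left=8)     # mid-frame shift is 1 px
```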
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, depending on the video content, it creates SI with rather significant motion compensation errors for some frame regions and rather small errors for others. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB relative to a WZ-only coding mode solution.
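The per-block decision described above can be caricatured as: estimate a rate for each mode, keep the cheaper one. The rate proxies below (spatial variance for Intra, temporal difference energy for WZ) are invented stand-ins for the paper's actual low-complexity estimators:

```python
import numpy as np

def select_modes(frame, prev_frame, bsize=8, wz_weight=2.0):
    """Toy mode decision: per block, compare invented Intra/WZ rate proxies."""
    modes = []
    for top in range(0, frame.shape[0], bsize):
        for left in range(0, frame.shape[1], bsize):
            blk = frame[top:top + bsize, left:left + bsize]
            ref = prev_frame[top:top + bsize, left:left + bsize]
            intra_rate = blk.var()                          # spatial activity proxy
            wz_rate = wz_weight * np.abs(blk - ref).mean()  # temporal error proxy
            modes.append('INTRA' if intra_rate < wz_rate else 'WZ')
    return modes

rng = np.random.default_rng(2)
prev = rng.random((16, 16))
frame = prev.copy()
frame[0:8, 0:8] = rng.random((8, 8))     # one block changes completely
modes = select_modes(frame, prev)         # the changed block is 'critical'
```

The point of the sketch is only the structure of the decision: blocks where temporal prediction is poor (high WZ proxy) get Intra help, everything else stays WZ-coded.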
Abstract:
Several studies have shown that patients with congestive heart failure (CHF) have compromised health-related quality of life (HRQL), which in recent years has become a primary endpoint when assessing the impact of treatment of chronic conditions such as CHF. Objectives: To evaluate the psychometric properties of the Portuguese version of a new specific instrument to measure HRQL in patients hospitalized for CHF: the Kansas City Cardiomyopathy Questionnaire (KCCQ). Methods: The KCCQ was applied to a sample of 193 consecutive patients hospitalized for CHF. Of these, 105 repeated the assessment 3 months after admission, with no events during this period. Mean age was 64.4±12.4 years (range 21-88), 72.5% of patients were male, and CHF was of ischemic etiology in 42% of cases. Results: This version of the KCCQ was subjected to statistical validation similar to the American version, with assessment of reliability and validity. Reliability was assessed by the internal consistency of the domains and summary scores, which showed Cronbach alpha values comparable across domains and summaries (α=0.50 to α=0.94). Validity was assessed by convergence, by sensitivity to differences between groups and by sensitivity to changes in clinical condition. We evaluated the convergent validity of all domains related to functionality through their relationship with a measure of functionality, the New York Heart Association (NYHA) classification; significant correlations were found (p<0.01), supporting the NYHA classification as a measure of functionality in patients with CHF. Analysis of variance between the physical limitation domain, the summary scores and NYHA class showed statistically significant differences (F=23.4; F=36.4; F=37.4; p=0.0001) in the ability to discriminate the severity of the clinical condition.
A second evaluation was performed on 105 patients at the 3-month follow-up outpatient appointment; significant changes were observed in the mean scores of the domains assessed between hospital admission and the clinic appointment (differences of 14.9 to 30.6 on a 0-100 scale), indicating that the domains assessed are sensitive to changes in clinical condition. The correlation between the quality-of-life dimensions that make up the KCCQ is moderate, suggesting independent dimensions and supporting its multifactorial structure and the suitability of this measure for HRQL evaluation. Conclusion: The KCCQ is a valid instrument, sensitive to change and specific for measuring HRQL in a Portuguese population with dilated cardiomyopathy and CHF.
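The internal-consistency statistic reported above, Cronbach's alpha, is computed as α = k/(k-1) · (1 - Σ item variances / variance of the total score), with k items. A sketch on synthetic item scores (the data below are invented, not KCCQ responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# synthetic domain: 4 items driven by one shared construct plus noise,
# so the items are correlated and alpha should be high
rng = np.random.default_rng(3)
trait = rng.normal(size=200)
items = trait[:, None] + 0.5 * rng.normal(size=(200, 4))
alpha = cronbach_alpha(items)
```

With this noise level the population value of alpha is about 0.94, near the top of the 0.50-0.94 range reported for the KCCQ domains.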
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
Abstract:
The growth experienced in recent years in both the variety and volume of structured products implies that banks and other financial institutions have become increasingly exposed to model risk. In this article we focus on the model risk associated with the local volatility (LV) model and with the Variance Gamma (VG) model. The results show that the LV model performs better than the VG model in terms of its ability to match the market prices of European options. Nevertheless, both models are subject to significant pricing errors when compared with the stochastic volatility framework.
Abstract:
This article presents a Markov chain framework to characterize the behavior of the CBOE Volatility Index (VIX index). Two possible regimes are considered: high volatility and low volatility. The specification accounts for deviations from normality and the existence of persistence in the evolution of the VIX index. Since the time evolution of the VIX index seems to indicate that its conditional variance is not constant over time, I consider two different versions of the model. In the first one, the variance of the index is a function of the volatility regime, whereas the second version includes an autoregressive conditional heteroskedasticity (ARCH) specification for the conditional variance of the index.
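The first model version above, a two-regime Markov chain in which the index innovation variance depends on the volatility regime, can be simulated directly. The transition probabilities and regime volatilities below are invented for illustration, not estimated from VIX data:

```python
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.95, 0.05],      # row 0: low-volatility regime (persistent)
              [0.10, 0.90]])     # row 1: high-volatility regime
sigma = np.array([1.0, 3.0])     # regime-dependent standard deviation

n, state = 10_000, 0
states = np.empty(n, dtype=int)
x = np.empty(n)
for t in range(n):
    states[t] = state
    x[t] = sigma[state] * rng.standard_normal()   # variance set by the regime
    state = rng.choice(2, p=P[state])             # Markov transition

# The stationary distribution of this chain is (2/3, 1/3); persistence makes
# the regimes last, and the unconditional law of x is a fat-tailed mixture.
```

The second version in the abstract would replace the constant within-regime variance by an ARCH recursion; the simulation skeleton stays the same.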
Abstract:
Based on our recent discovery of closed-form formulae for efficient mean-variance retentions in variable quota-share proportional reinsurance under group correlation, we analyze the influence of different combinations of correlation and safety loading levels on the efficient frontier, both in a stylized single-period problem and in a multiperiod one.
Abstract:
This work aimed to evaluate the influence of several quantities and test parameters on the melt flow index of thermoplastics and to calculate the uncertainty associated with the determinations. In a first phase, the main parameters influencing the determination of the melt flow index were identified, and the plastometer temperature, the load weight, the die diameter, the measurement length, the type of cut and the number of specimens were selected. To evaluate the influence of these parameters on the measurement of the melt flow index, a design of experiments was carried out, divided into three stages. Analysis of variance was used to treat the results obtained. After a complete analysis of the factorial designs, it was found that the effects of the plastometer temperature, load weight and die diameter factors are statistically significant for the measurement of the melt flow index. In the second phase, the uncertainty associated with the measurements was calculated. For this, one of the most common methods was selected, described in the Guide to the Expression of Uncertainty in Measurement and known as the GUM method, using the "step-by-step" approach. Initially, it was necessary to build a mathematical model for the measurement of the melt flow index relating the different parameters used. The behavior of each parameter was studied using two functions, again resorting to analysis of variance. Through the law of propagation of uncertainty it was possible to determine the combined standard uncertainty and, after estimating the number of degrees of freedom, the value of the coverage factor. Finally, the expanded uncertainty of the measurement was determined for the melt volume-flow rate determination.
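The GUM propagation step described above can be sketched on a simplified melt volume-flow rate model, MVR = A·L·600/t (piston area A, piston displacement L, measurement time t). The model form and all input uncertainties below are illustrative stand-ins for the ones built in the work, and a fixed coverage factor k = 2 replaces the degrees-of-freedom estimate:

```python
import math

# illustrative inputs: value, standard uncertainty (all invented)
A, u_A = 0.711, 0.002      # piston area, cm^2
L, u_L = 2.54, 0.01        # piston displacement, cm
t, u_t = 30.0, 0.2         # measurement time, s

mvr = A * L * 600.0 / t    # melt volume-flow rate, cm^3 per 10 min

# sensitivity coefficients: partial derivatives of the model
dA = L * 600.0 / t
dL = A * 600.0 / t
dt = -A * L * 600.0 / t ** 2

# law of propagation of uncertainty (uncorrelated inputs)
u_c = math.sqrt((dA * u_A) ** 2 + (dL * u_L) ** 2 + (dt * u_t) ** 2)

# expanded uncertainty with coverage factor k = 2 (approx. 95 % coverage)
U = 2.0 * u_c
```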
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
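The syndrome view of Slepian-Wolf coding underlying the decoders above can be shown at toy scale: the encoder transmits only the syndrome s = Hx (mod 2) of the source block x, and the decoder picks the word closest to its side information whose syndrome matches. The tiny parity-check matrix H is invented, and the brute-force search stands in for the large sparse LDPC matrices and belief propagation a real codec uses:

```python
import numpy as np

# invented 3x5 parity-check matrix (full rank over GF(2))
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 1, 0, 0]])

def syndrome(x):
    """Syndrome of a binary word over GF(2)."""
    return H @ x % 2

x = np.array([1, 0, 1, 1, 0])     # source block at the encoder
s = syndrome(x)                    # the only bits actually transmitted
si = np.array([1, 0, 1, 0, 0])    # side information: x with one "error"

# brute-force Slepian-Wolf decoder: among all words with the transmitted
# syndrome, pick the one at minimum Hamming distance from the SI
best = min((c for c in np.ndindex(*(2,) * 5)
            if np.array_equal(syndrome(np.array(c)), s)),
           key=lambda c: int(np.sum(np.array(c) != si)))
```

Because the side information differs from x in only one position, the closest syndrome-compatible word is x itself, i.e. the source is recovered from fewer bits than its length.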
Abstract:
A biosensor for urea has been developed based on the observation that urea is a powerful active-site inhibitor of amidase, which catalyzes the hydrolysis of amides such as acetamide to produce ammonia and the corresponding organic acid. Cell-free extract from Pseudomonas aeruginosa was the source of amidase (acylamide hydrolase, EC 3.5.1.4), which was immobilized on a polyethersulfone membrane in the presence of glutaraldehyde; an ion-selective electrode for ammonium ions was used for biosensor development. Analysis of variance was used for optimization of the biosensor response and showed that 30 µL of cell-free extract containing 7.47 mg protein mL⁻¹, 2 µL of glutaraldehyde (5%, v/v) and 10 µL of gelatin (15%, w/v) exhibited the highest response. Optimization of other parameters showed that pH 7.2 and a 30 min incubation time were optimum for incubation of membranes in urea. The biosensor exhibited a linear response in the range of 4.0-10.0 µM urea, a detection limit of 2.0 µM urea, a response time of 20 s, a sensitivity of 58.245% per µM urea and a storage stability of over 4 months. It was successfully used for quantification of urea in samples such as wine and milk; recovery experiments revealed an average substrate recovery of 94.9%. The urea analogs hydroxyurea, methylurea and thiourea inhibited amidase activity by about 90%, 10% and 0%, respectively, compared with urea inhibition.
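The linear response and recovery figures above rest on a standard calibration workflow: fit readings against known concentrations over the linear range, then invert the fitted line to quantify spiked samples. The sensor readings below are synthetic; only the 4.0-10.0 µM linear range comes from the abstract:

```python
import numpy as np

conc = np.array([4.0, 6.0, 8.0, 10.0])         # µM urea standards (linear range)
signal = np.array([12.1, 18.0, 23.8, 30.2])    # invented sensor readings

slope, intercept = np.polyfit(conc, signal, 1)  # least-squares calibration line

def measure(reading):
    """Invert the calibration line to estimate a concentration."""
    return (reading - intercept) / slope

# recovery experiment: a known amount is added and re-measured
spiked = 6.0                     # µM added
found = measure(18.0)            # concentration recovered from the reading
recovery = 100.0 * found / spiked
```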
Abstract:
Master's degree in Accounting and Management of Financial Institutions
Abstract:
Background - The rate and fitness effects of mutations are key to understanding the evolution of every species. Traditionally, these parameters are estimated in mutation accumulation experiments where replicate lines are propagated in conditions that allow mutations to accumulate randomly without the purging effect of natural selection. These experiments have been performed with many model organisms, but we still lack empirical estimates of the rate and effects of mutation in the protists. Results - We performed a mutation accumulation (MA) experiment in Tetrahymena thermophila, a species that can reproduce sexually and asexually in nature, and measured both the mean decline and the variance increase in fitness of 20 lines. The results obtained with T. thermophila were compared with T. pyriformis, an obligately asexual species. We show that MA lines of T. thermophila go extinct at a rate of 1.25 clonal extinctions per bottleneck. In contrast, populations of T. pyriformis show a much higher resistance to extinction. Variation in gene copy number is likely to be a key factor in explaining these results, and indeed we show that T. pyriformis has a higher mean copy number per cell than T. thermophila. From fitness measurements during the MA experiment, we infer a rate of mutation to copy number variation of 0.0333 per haploid MAC genome of T. thermophila and a mean effect against copy number variation of 0.16. A strong effect of population size on the rate of fitness decline was also found, consistent with the increased power of natural selection. Conclusions - The rate of clonal extinction measured for T. thermophila is characteristic of mutational degradation and suggests that this species must undergo sexual reproduction to avoid the deleterious effects detected in the laboratory experiments.
We also suggest that an increase in chromosomal copy number associated with the phenotypic assortment of amitotic divisions can provide an alternative mechanism to escape the deleterious effect of random chromosomal copy number variation in species like T. pyriformis that lack the resetting mechanism of sexual reproduction. Our results are relevant to the understanding of cell line longevity and senescence in ciliates.