966 results for Fixed Block size Transform Coding


Relevance:

100.00%

Publisher:

Abstract:

The initial image-processing stages of visual cortex are well suited to a local (patchwise) analysis of the viewed scene. But the world's structures extend over space as textures and surfaces, suggesting the need for spatial integration. Most models of contrast vision fall shy of this process because (i) the weak area summation at detection threshold is attributed to probability summation (PS) and (ii) there is little or no advantage of area well above threshold. Both of these views are challenged here. First, it is shown that results at threshold are consistent with linear summation of contrast following retinal inhomogeneity, spatial filtering, nonlinear contrast transduction and multiple sources of additive Gaussian noise. We suggest that the suprathreshold loss of the area advantage in previous studies is due to a concomitant increase in suppression from the pedestal. To overcome this confound, a novel stimulus class is designed where: (i) the observer operates on a constant retinal area, (ii) the target area is controlled within this summation field, and (iii) the pedestal is fixed in size. Using this arrangement, substantial summation is found along the entire masking function, including the region of facilitation. Our analysis shows that PS and uncertainty cannot account for the results, and that suprathreshold summation of contrast extends over at least seven target cycles of grating. © 2007 The Royal Society.

Relevance:

100.00%

Publisher:

Abstract:

The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images with the emphasis almost totally on transform coding. The transform technique adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders. Thereafter the techniques of source coding differ substantially: one technique involves zonal coding, the other threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed first in the absence, and then in the presence, of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each combined source and channel error protection scheme, each of which is described in full, are then presented. Comparisons are made between the schemes and indicate the best one to use at a given channel error rate.
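
As a concrete, minimal illustration of the two coefficient-selection strategies compared in the thesis, the sketch below applies a block DCT and keeps coefficients either over a fixed low-frequency zone (zonal coding) or by magnitude (threshold coding). This is not the thesis's implementation; the 8 x 8 block size, the coefficient budget, and all function names are assumptions for illustration.

```python
# Hedged sketch: block-DCT coding with zonal vs. threshold coefficient selection.
import numpy as np
from scipy.fft import dctn, idctn

def zonal_mask(n, keep):
    """Fixed low-frequency zone: keep coefficients with u + v < keep."""
    u, v = np.indices((n, n))
    return (u + v) < keep

def code_block(block, mode="zonal", keep=4):
    coeffs = dctn(block, norm="ortho")
    if mode == "zonal":
        # Fixed zone, independent of the block's content.
        coeffs = np.where(zonal_mask(block.shape[0], keep), coeffs, 0.0)
    else:
        # Threshold coding: keep the same number of coefficients,
        # but choose the largest-magnitude ones in this block.
        budget = keep * (keep + 1) // 2
        thr = np.sort(np.abs(coeffs), axis=None)[-budget]
        coeffs = np.where(np.abs(coeffs) >= thr, coeffs, 0.0)
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
blk = rng.random((8, 8))
for mode in ("zonal", "threshold"):
    print(mode, np.abs(blk - code_block(blk, mode)).mean())
```

For the same coefficient budget, threshold coding typically reconstructs a block more accurately, at the cost of also having to transmit the positions of the retained coefficients; that is the trade-off the two source coders embody.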

Relevance:

100.00%

Publisher:

Abstract:

Sparse representation of astronomical images is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm (i) the effectiveness at producing sparse representations and (ii) competitiveness, with respect to the time required to process large images. The latter is a consequence of the suitability of the proposed dictionaries for approximating images in partitions of small blocks. This feature makes it possible to apply the effective greedy selection technique called orthogonal matching pursuit, up to some block size. For blocks exceeding that size, a refinement of the original matching pursuit approach is considered. The resulting method is termed "self-projected matching pursuit," because it is shown to be effective for implementing, via matching pursuit itself, the optional backprojection intermediate steps in that approach. © 2013 Optical Society of America.
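
The greedy selection strategy named in the abstract, orthogonal matching pursuit, can be sketched in a few lines. This toy version uses a plain 1D DCT dictionary rather than the paper's mixed dictionaries, and all names and sizes are illustrative assumptions.

```python
# Hedged sketch of orthogonal matching pursuit (OMP) with a DCT dictionary.
import numpy as np
from scipy.fft import idct

def dct_dictionary(n):
    """Columns are orthonormal 1D DCT-II atoms of length n."""
    return idct(np.eye(n), norm="ortho", axis=0)

def omp(D, y, k, tol=1e-12):
    """Greedily select up to k atoms, re-projecting y onto them each step."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        if np.linalg.norm(residual) < tol:
            break
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Orthogonal projection onto the span of the selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

D = dct_dictionary(16)
y = 2.0 * D[:, 1] - 0.5 * D[:, 7]        # a 2-sparse signal in this dictionary
x = omp(D, y, k=3)
print(np.nonzero(x)[0], np.linalg.norm(y - D @ x))   # atoms [1 7], ~0 error
```

The re-projection step is what makes OMP expensive for large blocks, which is consistent with the abstract's strategy of applying OMP only up to some block size and falling back on a matching pursuit refinement beyond it.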

Relevance:

100.00%

Publisher:

Abstract:

We propose weakly-constrained stream and block codes with tunable pattern-dependent statistics and demonstrate that the block code capacity at large block sizes is close to the prediction obtained from a simple Markov model published earlier. We demonstrate the feasibility of the code by presenting original encoding and decoding algorithms with a complexity log-linear in the block size and with modest table memory requirements. We also show that when such codes are used for mitigation of patterning effects in optical fibre communications, a gain of about 0.5 dB is possible under realistic conditions, at the expense of a small redundancy (≈10%). © 2006 IEEE.
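
The claim that block-code capacity at large block sizes approaches a Markov-model prediction can be illustrated on a toy pattern constraint. The "no run of three consecutive ones" rule below merely stands in for the paper's pattern-dependent constraints and is purely an assumption for illustration, as are all names.

```python
# Toy check: exact block-code capacity log2(N(n))/n vs. the asymptotic
# transfer-matrix (Markov) prediction, for the constraint "no 111 substring".
import numpy as np
from itertools import product

def count_valid(n):
    """Number of binary blocks of length n containing no run of three 1s."""
    return sum(1 for w in product("01", repeat=n) if "111" not in "".join(w))

# Transfer matrix over states = last two bits, with the 111 transition removed.
states = ["00", "01", "10", "11"]
T = np.zeros((4, 4))
for i, s in enumerate(states):
    for b in "01":
        if (s + b) != "111":
            T[i, states.index(s[1] + b)] = 1.0
capacity = np.log2(np.linalg.eigvals(T).real.max())   # ~0.879 bits/symbol

for n in (8, 12, 16):
    print(n, round(np.log2(count_valid(n)) / n, 4), round(capacity, 4))
```

In this toy the per-block rate converges to the Markov prediction from above as the block size grows, the same qualitative behaviour the abstract reports for its codes.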

Relevance:

100.00%

Publisher:

Abstract:

The contributions of this dissertation are in the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in estimating motion, which stems from the shift-variant property of the wavelet transform.

The enormous data generated by digital videos call for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reliably, hierarchical motion estimation by coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability.

Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It identifies possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals, giving more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains motion consistency between the original frame and the decomposed subframes, consequently improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
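
The coarse-to-fine principle behind hierarchical motion estimation can be sketched with a two-level pyramid and plain block matching. This is a generic pixel-domain illustration, not the dissertation's level-refined wavelet scheme; the block size, search ranges, and all names are assumptions.

```python
# Hedged sketch: two-level coarse-to-fine block-matching motion estimation.
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def block_match(ref, blk, cy, cx, r):
    """Full search over a (2r+1)^2 window centred at (cy, cx)."""
    n = blk.shape[0]
    best, best_cost = (0, 0), np.inf
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                cost = sad(ref[y:y + n, x:x + n], blk)
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best

def hierarchical_me(ref, cur, y, x, n=16, r=4):
    # Coarse level: estimate on 2x-downsampled frames (cheap, wide reach).
    ref2, cur2 = ref[::2, ::2], cur[::2, ::2]
    blk2 = cur2[y // 2:y // 2 + n // 2, x // 2:x // 2 + n // 2]
    dy, dx = block_match(ref2, blk2, y // 2, x // 2, r)
    # Fine level: double the coarse vector, then refine with a +/-1 search.
    dy, dx = 2 * dy, 2 * dx
    ry, rx = block_match(ref, cur[y:y + n, x:x + n], y + dy, x + dx, 1)
    return dy + ry, dx + rx

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, (6, -4), axis=(0, 1))   # content shifted down 6, left 4
print(hierarchical_me(ref, cur, 24, 24))   # -> (-6, 4), back to the match
```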

Relevance:

100.00%

Publisher:

Abstract:

We study theoretically the effect of a new type of blocklike positional disorder on the effective electromagnetic properties of one-dimensional chains of resonant, high-permittivity dielectric particles, in which particles are arranged into perfectly well-ordered blocks whose relative position is a random variable. This creates a finite order-correlation length that mimics the situation encountered in metamaterials fabricated through self-assembly techniques, whose structures often display short-range order between near neighbors but long-range disorder due to stacking defects. Using a spectral theory approach combined with a principal component statistical analysis, we study, in the long-wavelength regime, the evolution of the electromagnetic response when the composite filling fraction and the block size are changed. Modifications in key features of the resonant response (amplitude, width, etc.) are investigated, showing a regime transition for a filling fraction around 50%.

Relevance:

100.00%

Publisher:

Abstract:

Centromeres are essential chromosomal loci at which kinetochore formation occurs for spindle fiber attachment during mitosis and meiosis, guiding proper segregation of chromosomes. In humans, centromeres are located at large arrays of alpha satellite DNA, contributing to but not defining centromere function. The histone variant CENP-A assembles at alpha satellite DNA, epigenetically defining the centromere. CENP-A containing chromatin exists as an essential domain composed of blocks of CENP-A nucleosomes interspersed with blocks of H3 nucleosomes, and is surrounded by pericentromeric heterochromatin. In order to maintain genomic stability, the CENP-A domain is propagated epigenetically over each cell division; disruption of propagation is associated with chromosome instabilities such as aneuploidy, found in birth defects and in cancer.

The CENP-A chromatin domain occupies 30-45% of the alpha satellite array, varying in genomic distance according to the underlying array size. However, the molecular mechanisms that control assembly and organization of CENP-A chromatin within its genomic context remain unclear. The domain may shift, expand, or contract, as CENP-A is loaded and dispersed each cell cycle. We hypothesized that in order to maintain genome stability, the centromere is inherited as static chromatin domains, maintaining size and position within the pericentric heterochromatin. Utilizing stretched chromatin fibers, I found that CENP-A chromatin is limited to a sub-region of the alpha satellite array that is fixed in size and location through the cell cycle and across populations.

The average amount of CENP-A at human centromeres is largely consistent, implying that the variation in size of CENP-A domains reflects variations in the number of CENP-A subdomains and/or the density of CENP-A nucleosomes. Multi-color nascent protein labeling experiments were utilized to examine the distribution and incorporation of distinct pools of CENP-A over several cell cycles. I found that in each cell cycle there is independent CENP-A distribution, occurring equally between sister centromeres across all chromosomes, in similar quantities. Furthermore, centromere inheritance is achieved through specific placement of CENP-A, following an oscillating pattern that fixes the location and size of the CENP-A domain. These results suggest that spatial and temporal dynamics of CENP-A are important for maintaining centromere and genome stability.

Relevance:

100.00%

Publisher:

Abstract:

Background: Delirium is frequently diagnosed in critically ill patients and is associated with poor clinical outcomes. Haloperidol is the most commonly used drug for delirium despite little evidence of its effectiveness. The aim of this study was to establish whether early treatment with haloperidol would decrease the time that survivors of critical illness spent in delirium or coma. Methods: We did this double-blind, placebo-controlled randomised trial in a general adult intensive care unit (ICU). Critically ill patients (≥18 years) needing mechanical ventilation within 72 h of admission were enrolled. Patients were randomised (by an independent nurse, in 1:1 ratio, with permuted block sizes of four and six, using a centralised, secure web-based randomisation service) to receive haloperidol 2·5 mg or 0·9% saline placebo intravenously every 8 h, irrespective of coma or delirium status. Study drug was discontinued on ICU discharge, once delirium-free and coma-free for 2 consecutive days, or after a maximum of 14 days of treatment, whichever came first. Delirium was assessed using the confusion assessment method for the ICU (CAM-ICU). The primary outcome was delirium-free and coma-free days, defined as the number of days in the first 14 days after randomisation during which the patient was alive without delirium and not in coma from any cause. Patients who died within the 14 day study period were recorded as having 0 days free of delirium and coma. ICU clinical and research staff and patients were masked to treatment throughout the study. Analyses were by intention to treat. This trial is registered with the International Standard Randomised Controlled Trial Registry, number ISRCTN83567338. Findings: 142 patients were randomised, 141 were included in the final analysis (71 haloperidol, 70 placebo). Patients in the haloperidol group spent about the same number of days alive, without delirium, and without coma as did patients in the placebo group (median 5 days [IQR 0-10] vs 6 days [0-11]; p=0·53). The most common adverse events were oversedation (11 patients in the haloperidol group vs six in the placebo group) and QTc prolongation (seven patients in the haloperidol group vs six in the placebo group). No patient had a serious adverse event related to the study drug. Interpretation: These results do not support the hypothesis that haloperidol modifies duration of delirium in critically ill patients. Although haloperidol can be used safely in this population of patients, pending the results of trials in progress, the use of intravenous haloperidol should be reserved for short-term management of acute agitation. Funding: National Institute for Health Research. © 2013 Elsevier Ltd.
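
For illustration only, a 1:1 permuted-block allocation with mixed block sizes of four and six, as described above, could be generated along the following lines. This is a hypothetical sketch, not the trial's randomisation software; the seed and all names are assumptions.

```python
# Hedged sketch of 1:1 permuted-block randomisation with block sizes 4 and 6.
import random

def permuted_block_sequence(n, arms=("haloperidol", "placebo"),
                            block_sizes=(4, 6), seed=42):
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        b = rng.choice(block_sizes)              # random block size per block
        block = list(arms) * (b // len(arms))    # equal counts within a block
        rng.shuffle(block)                       # permute within the block
        seq.extend(block)
    return seq[:n]

alloc = permuted_block_sequence(142)
print(alloc[:12])
print(alloc.count("haloperidol"), alloc.count("placebo"))  # near 71 / 71
```

Balancing within each block keeps the group sizes close at any interim point, while randomly mixing the two block sizes makes the next allocation harder to predict.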

Relevance:

100.00%

Publisher:

Abstract:

We propose weakly-constrained stream and block codes with tunable pattern-dependent statistics and demonstrate that the block code capacity at large block sizes is close to the prediction obtained from a simple Markov model published earlier. We demonstrate the feasibility of the code by presenting original encoding and decoding algorithms with a complexity log-linear in the block size and with modest table memory requirements. We also show that when such codes are used for mitigation of patterning effects in optical fibre communications, a gain of about 0.5 dB is possible under realistic conditions, at the expense of a small redundancy (≈10%). © 2010 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

Background Edoxaban, an oral factor Xa inhibitor, is non-inferior for prevention of stroke and systemic embolism in patients with atrial fibrillation and is associated with less bleeding than well controlled warfarin therapy. Few safety data about edoxaban in patients undergoing electrical cardioversion are available. Methods We did a multicentre, prospective, randomised, open-label, blinded-endpoint evaluation trial in 19 countries with 239 sites comparing edoxaban 60 mg per day with enoxaparin–warfarin in patients undergoing electrical cardioversion of non-valvular atrial fibrillation. The dose of edoxaban was reduced to 30 mg per day if one or more factors (creatinine clearance 15–50 mL/min, low bodyweight [≤60 kg], or concomitant use of P-glycoprotein inhibitors) were present. Block randomisation (block size four)—stratified by cardioversion approach (transoesophageal echocardiography [TEE] or not), anticoagulant experience, selected edoxaban dose, and region—was done through a voice-web system. The primary efficacy endpoint was a composite of stroke, systemic embolic event, myocardial infarction, and cardiovascular mortality, analysed by intention to treat. The primary safety endpoint was major and clinically relevant non-major (CRNM) bleeding in patients who received at least one dose of study drug. Follow-up was 28 days on study drug after cardioversion plus 30 days to assess safety. This trial is registered with ClinicalTrials.gov, number NCT02072434. Findings Between March 25, 2014, and Oct 28, 2015, 2199 patients were enrolled and randomly assigned to receive edoxaban (n=1095) or enoxaparin–warfarin (n=1104). The mean age was 64 years (SD 10·54) and mean CHA2DS2-VASc score was 2·6 (SD 1·4). Mean time in therapeutic range on warfarin was 70·8% (SD 27·4). The primary efficacy endpoint occurred in five (<1%) patients in the edoxaban group versus 11 (1%) in the enoxaparin–warfarin group (odds ratio [OR] 0·46, 95% CI 0·12–1·43). The primary safety endpoint occurred in 16 (1%) of 1067 patients given edoxaban versus 11 (1%) of 1082 patients given enoxaparin–warfarin (OR 1·48, 95% CI 0·64–3·55). The results were independent of the TEE-guided strategy and anticoagulation status. Interpretation ENSURE-AF is the largest prospective randomised clinical trial of anticoagulation for cardioversion of patients with non-valvular atrial fibrillation. Rates of major and CRNM bleeding and thromboembolism were low in the two treatment groups. Funding Daiichi Sankyo provided financial support for the study. © 2016 Elsevier Ltd

Relevance:

50.00%

Publisher:

Abstract:

Recently, polynomial phase modulation (PPM) was shown to be a power- and bandwidth-efficient modulation format. These two characteristics are in high demand nowadays, especially in mobile applications, where devices with size, weight, and power (SWaP) constraints are common. In this paper, we propose implementing a full-diversity quasi-orthogonal space-time block code (QOSTBC) using polynomial phase signals as the modulation format. QOSTBCs along with PPM are used in order to improve the power efficiency of communication systems with four transmit antennas. We obtain the optimal PPM constellations that ensure full diversity and maximize the QOSTBC's minimum coding gain distance. Simulation results show that by using QOSTBCs along with a properly selected PPM constellation, full diversity in flat fading channels, and thus low BER at high signal-to-noise ratios (SNR), can be ensured. More importantly, it is also shown that QOSTBCs using PPM achieve better error performance than those using conventional modulation formats.
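
As an indication of the codeword structure involved, the sketch below builds a 4 x 4 quasi-orthogonal space-time block codeword in the style of Jafarkhani's construction from two stacked Alamouti blocks. The paper pairs such a code with optimised PPM constellations; the symbols here are arbitrary placeholders.

```python
# Sketch of a Jafarkhani-style 4-antenna QOSTBC codeword (rows = time slots,
# columns = transmit antennas); the Gram matrix exposes the quasi-orthogonality.
import numpy as np

def qostbc_codeword(s1, s2, s3, s4):
    c = np.conj
    return np.array([
        [ s1,     s2,     s3,     s4   ],
        [-c(s2),  c(s1), -c(s4),  c(s3)],
        [-c(s3), -c(s4),  c(s1),  c(s2)],
        [ s4,    -s3,    -s2,     s1   ],
    ])

X = qostbc_codeword(1 + 1j, 1 - 1j, -1 + 1j, 1 + 1j)
G = X.conj().T @ X
# Only the (1,4) and (2,3) column pairs interfere: the code is quasi- rather
# than fully orthogonal, which is the price of rate one with four antennas.
print(np.round(G.real, 6))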

Relevance:

40.00%

Publisher:

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder, and not at the encoder as in predictive video coding. Although many improvements have been made over recent years, the performance of state-of-the-art WZ video codecs still has not reached that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform-domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement and objective quality gains, which can reach 0.63 dB overall for sequences with high motion content when large group of pictures (GOP) sizes are used.
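
The edge-adaptive idea, filtering across block boundaries only where the discontinuity is small enough to be a coding artefact rather than a real edge, can be shown with a toy filter. The codec's actual filter adapts the far more elaborate H.264/AVC design; the threshold, weights, and names below are assumptions.

```python
# Toy edge-adaptive deblocking across vertical 4x4 block boundaries.
import numpy as np

def deblock_vertical(img, block=4, alpha=12.0):
    out = img.astype(float)
    for x in range(block, out.shape[1], block):
        p = out[:, x - 1].copy()          # last column of the left block
        q = out[:, x].copy()              # first column of the right block
        mask = np.abs(p - q) < alpha      # large jumps are kept as real edges
        out[:, x - 1] = np.where(mask, 0.75 * p + 0.25 * q, p)
        out[:, x] = np.where(mask, 0.25 * p + 0.75 * q, q)
    return out

# A flat image quantised per 4x4 block shows seams; filtering reduces them.
rng = np.random.default_rng(1)
frame = np.repeat(np.repeat(rng.integers(100, 110, (4, 4)), 4, 0), 4, 1)
print(np.abs(np.diff(frame, axis=1)).sum())
print(np.abs(np.diff(deblock_vertical(frame), axis=1)).sum())
```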

Relevance:

40.00%

Publisher:

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder and not at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves over the same codec without the proposed motion learning approach, for high-motion sequences and long group of pictures (GOP) sizes.

Relevance:

40.00%

Publisher:

Abstract:

A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency levels provided by the proposed structure, which presents a throughput per unit of area higher than other similar recently published designs targeting the H.264/AVC standard. Such results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x relative to pure software implementations of the transform algorithms, therefore allowing the real-time computation of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (7,680 x 4,320 @ 30 fps).
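
For reference, the two-dimensional 4 x 4 integer core transform and the 2 x 2 Hadamard transform that such an accelerator computes are fixed by the H.264/AVC standard and are easy to express in software, for example as a golden model against which a hardware implementation can be verified; the function names are illustrative.

```python
# H.264/AVC integer transforms: the 4x4 core transform and the 2x2 Hadamard
# transform (used for chroma DC). Per the standard, scaling is folded into
# quantisation, so none is applied here.
import numpy as np

C4 = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])
H2 = np.array([[1,  1],
               [1, -1]])

def forward_4x4(block):
    """Two-dimensional transform: Y = C4 . X . C4^T (integer arithmetic only)."""
    return C4 @ block @ C4.T

def hadamard_2x2(dc):
    return H2 @ dc @ H2.T

X = np.arange(16).reshape(4, 4)
print(forward_4x4(X))
print(hadamard_2x2(np.array([[1, 2], [3, 4]])))
```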

Relevance:

40.00%

Publisher:

Abstract:

Long Term Evolution (LTE) is one of the latest standards in the mobile communications market. To achieve its performance, LTE networks use several techniques, such as multi-carrier techniques, multiple-input multiple-output, and cooperative communications. Within cooperative communications, this paper focuses on the fixed relaying technique, presenting a way to determine the best position to deploy the relay station (RS), from a set of empirically good solutions, and also to quantify the associated performance gain using different cluster size configurations. The best RS position was obtained through realistic simulations, which place it at the middle of the cell's circumference arc. Additionally, the simulations confirmed that the network's performance improves as the number of RSs increases. It was possible to conclude that, for each deployed RS, the percentage of area served by an RS increases by about 10%. Furthermore, the mean data rate in the cell increased by approximately 60% through the use of RSs. Finally, a scenario with a larger number of RSs can deliver the same performance as an equivalent scenario without RSs but with a higher reuse distance. This leads to a compromise between RS installation and cluster size in order to maximize capacity as well as performance.