922 results for Error Correction Coding, Error Resilience, MPEG-4, Video Coding
Abstract:
While operations are performed on qubits, various errors can occur, thereby modifying the information they contain. Quantum Error Correction builds algorithms that make it possible to tolerate these errors and protect the information being processed. This thesis focuses on 3-qubit codes, which can correct a bit-flip error or a phase-flip error. More precisely, within these algorithms, attention is focused on the encoding procedure, which aims to better protect the information contained in a qubit against errors, and on the syndrome measurement, which identifies the qubit on which an error occurred without altering the state of the system. Furthermore, by exploiting the syndrome measurement procedure, the probability of bit-flip and phase-flip errors on a qubit was estimated using the IBM Quantum Experience.
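As an illustration of the encoding and syndrome measurement steps discussed above, the sketch below builds the 3-qubit bit-flip code as a quantum circuit. It assumes Qiskit is available; the bit flip injected on the middle qubit is an arbitrary example, not part of the thesis.

```python
# Minimal sketch of the 3-qubit bit-flip code: encoding plus syndrome
# measurement with two ancilla qubits (assumes Qiskit is installed).
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

data = QuantumRegister(3, "data")      # logical qubit spread over 3 physical qubits
anc = QuantumRegister(2, "ancilla")    # syndrome qubits
syn = ClassicalRegister(2, "syndrome")
qc = QuantumCircuit(data, anc, syn)

# Encoding: |psi>|0>|0>  ->  alpha|000> + beta|111>
qc.cx(data[0], data[1])
qc.cx(data[0], data[2])

# Example error channel: a bit flip on the middle qubit (illustration only).
qc.x(data[1])

# Syndrome measurement: parity checks Z0Z1 and Z1Z2 copied onto the ancillas.
qc.cx(data[0], anc[0])
qc.cx(data[1], anc[0])
qc.cx(data[1], anc[1])
qc.cx(data[2], anc[1])
qc.measure(anc, syn)

# Syndrome "11" identifies the flip on data[1] without measuring the data qubits.
print(qc.draw())
```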
Abstract:
Artesian confined aquifers do not need pumping energy, and water from the aquifer flows naturally at the wellhead. This study proposes correcting the method for analyzing flowing well tests presented by Jacob and Lohman (1952) by considering the head losses due to friction in the well casing. The application of the proposed correction allowed the determination of a transmissivity (T = 411 m²/d) and storage coefficient (S = 3 × 10⁻⁴) which appear to be representative of the confined Guarani Aquifer in the study area. Ignoring the correction due to head losses in the well casing, the error in the transmissivity evaluation is about 18%. For the storage coefficient the error is of 5 orders of magnitude, resulting in a physically unacceptable value. The effect of the proposed correction on the calculated radius of the cone of depression and the corresponding well interference is also discussed.
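The abstract does not give the exact form of the correction, so the following is only a rough sketch of the idea: estimate the friction head loss in the casing (here with the Darcy-Weisbach equation) and subtract it from the drawdown fed into the Jacob-Lohman analysis. The discharge, casing geometry, and friction factor are hypothetical values.

```python
# Illustrative sketch (not the paper's exact procedure): correcting the
# drawdown of a flowing well for friction head losses in the casing with the
# Darcy-Weisbach equation. All numerical values below are hypothetical.
import math

def friction_head_loss(Q, length, diameter, friction_factor=0.02, g=9.81):
    """Darcy-Weisbach head loss (m) for discharge Q (m^3/s) in a casing of the
    given length and internal diameter (m)."""
    area = math.pi * diameter ** 2 / 4.0
    velocity = Q / area
    return friction_factor * (length / diameter) * velocity ** 2 / (2.0 * g)

Q = 0.03            # wellhead discharge, m^3/s (hypothetical)
s_observed = 25.0   # drawdown observed at the wellhead, m (hypothetical)

h_f = friction_head_loss(Q, length=300.0, diameter=0.15)
s_corrected = s_observed - h_f  # drawdown actually imposed on the aquifer

# The s_w/Q ratio used in the Jacob-Lohman analysis should be built from the
# corrected drawdown, otherwise T and S are biased.
print(f"friction loss = {h_f:.2f} m, corrected drawdown = {s_corrected:.2f} m")
```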
Abstract:
The purpose of this article is to present a quantitative analysis of the human failure contribution in the collision and/or grounding of oil tankers, considering the recommendation of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the employed methodology is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. Later, this methodology is applied to a ship operating on the Brazilian coast and, thereafter, the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. Therefore, an operator will be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can be considered a reference in human reliability analysis for the maritime industry, which, in spite of having some guides for risk analysis, has few studies on human reliability effectively applied to the sector.
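As a minimal illustration of how human error probabilities can be combined quantitatively in this kind of analysis, the sketch below propagates basic error probabilities through a simple AND/OR event structure under an independence assumption. The task names and probability values are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch of combining basic human error probabilities (HEPs) in a
# simple event structure, in the spirit of quantitative human reliability
# analysis. Events are assumed independent; names and values are invented.

def p_and(*probs):
    """All independent failures must occur (AND gate)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """At least one independent failure occurs (OR gate)."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

# Hypothetical basic events contributing to a collision scenario.
hep_lookout_failure = 1e-2   # officer fails to detect approaching traffic
hep_wrong_maneuver = 5e-3    # wrong evasive maneuver is chosen
p_recovery_fails = 0.3       # second officer fails to catch the error

# Collision requires a detection OR decision failure, AND a failed recovery.
p_human_contribution = p_and(p_or(hep_lookout_failure, hep_wrong_maneuver),
                             p_recovery_fails)
print(f"human failure contribution: {p_human_contribution:.2e}")
```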
Abstract:
Background: Biochemical analysis of fluid is the primary laboratory approach in pleural effusion diagnosis. Standardization of the steps between collection and laboratory analysis is fundamental to maintain the quality of the results. We evaluated the influence of temperature and storage time on sample stability. Methods: Pleural fluid from 30 patients was submitted to analyses of proteins, albumin, lactic dehydrogenase (LDH), cholesterol, triglycerides, and glucose. Aliquots were stored at 21°C, 4°C, and -20°C, and concentrations were determined after 1, 2, 3, 4, 7, and 14 days. LDH isoenzymes were quantified in 7 random samples. Results: Due to the instability of isoenzymes 4 and 5, a decrease in LDH was observed in the first 24 h in samples maintained at -20°C and after 2 days when maintained at 4°C. Aside from glucose, all parameters were stable up to at least day 4 when stored at room temperature or 4°C. Conclusions: Temperature and storage time are potential sources of preanalytical error in pleural fluid analyses, mainly considering the instability of glucose and LDH. The ideal procedure is to execute all the tests immediately after collection. However, most of the tests can be done on refrigerated samples, except for LDH analysis. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Parenteral anticoagulation is a cornerstone in the management of venous and arterial thrombosis. Unfractionated heparin has a wide dose/response relationship, requiring frequent and troublesome laboratory follow-up. Because of these factors, low-molecular-weight heparin use has been increasing. Inadequate dosage has been pointed out as a potential problem, because the use of subjectively estimated weight instead of real measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weight of 28 patients in need of parenteral anticoagulation. Basal and steady-state (after the second subcutaneous dose of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into 2 groups according to the adequacy of anticoagulation. Of the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID of enoxaparin. Only 4 (14.3%) of all patients had anti-Xa activity less than the lower limit of the therapeutic range (<0.5 IU/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 IU/mL) during the initial, crucial phase of treatment. (C) 2011 Elsevier Inc. All rights reserved.
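The adequacy criterion above reduces to simple arithmetic; the sketch below encodes it, using the 0.9 mg/kg per dose and 0.5 IU/mL thresholds from the abstract together with hypothetical patient values.

```python
# Minimal sketch of the dose-adequacy criterion described above: at least
# 0.9 mg/kg of enoxaparin per dose (twice daily), with steady-state anti-Xa
# activity below 0.5 IU/mL flagged as subtherapeutic. Thresholds come from the
# abstract; the patient weights and dose below are hypothetical.

def dose_is_adequate(dose_mg: float, measured_weight_kg: float) -> bool:
    """True when the prescribed dose reaches 0.9 mg/kg of the measured weight."""
    return dose_mg / measured_weight_kg >= 0.9

def anti_xa_is_subtherapeutic(anti_xa_iu_per_ml: float) -> bool:
    """True when anti-Xa activity falls below the 0.5 IU/mL lower limit."""
    return anti_xa_iu_per_ml < 0.5

# Hypothetical patient: the dose was calculated from an estimated weight of
# 70 kg, but the measured weight is 85 kg.
dose_mg = 0.9 * 70  # 63 mg per dose, based on the subjective estimate
print(dose_is_adequate(dose_mg, measured_weight_kg=85))  # False: under-dosed
print(anti_xa_is_subtherapeutic(0.42))                   # True: below 0.5 IU/mL
```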
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors. Identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
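As a sketch of the isolation step described above, the code below projects an observed residual onto the subspace spanned by each error class's feature matrix and reports the class with the smallest out-of-subspace component. The feature matrices and residual are random placeholders rather than the ASM 1 case-study data.

```python
# Hedged sketch of residual-based error isolation: each candidate class of
# coding errors has a feature matrix F_i, and the class whose column space
# best contains the observed residual vector is reported.
import numpy as np

def isolate_error_class(residual, feature_matrices):
    """Return the index of the feature matrix whose span leaves the smallest
    out-of-subspace component of the residual."""
    scores = []
    for F in feature_matrices:
        proj = F @ np.linalg.pinv(F)          # orthogonal projector onto span(F)
        scores.append(np.linalg.norm(residual - proj @ residual))
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
feature_matrices = [rng.standard_normal((6, 2)) for _ in range(3)]
true_class = 1
residual = feature_matrices[true_class] @ rng.standard_normal(2)  # lies in class-1 subspace

print(isolate_error_class(residual, feature_matrices))  # -> 1
```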
Abstract:
Time motion analysis is extensively used to assess the demands of team sports. At present there is only limited information on the reliability of measurements using this analysis tool. The aim of this study was to establish the reliability of an individual observer's time motion analysis of rugby union. Ten elite level rugby players were individually tracked in Southern Hemisphere Super 12 matches using a digital video camera. The video footage was subsequently analysed by a single researcher on two occasions one month apart. The test-retest reliability was quantified as the typical error of measurement (TEM) and rated as either good (<5% TEM), moderate (5-10% TEM) or poor (>10% TEM). The total time spent in the individual movements of walking, jogging, striding, sprinting, static exertion and being stationary had moderate to poor reliability (5.8-11.1% TEM). The frequency of individual movements had good to poor reliability (4.3-13.6% TEM), while the mean duration of individual movements had moderate reliability (7.1-9.3% TEM). For the individual observer in the present investigation, time motion analysis was shown to be moderately reliable as an evaluation tool for examining the movement patterns of players in competitive rugby. These reliability values should be considered when assessing the movement patterns of rugby players within competition.
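For reference, the typical error of measurement used to grade reliability above can be computed as the standard deviation of the test-retest differences divided by the square root of two, expressed as a percentage of the grand mean. The sketch below uses made-up movement times, not the Super 12 data.

```python
# Sketch of the typical error of measurement (TEM) as a percentage of the
# mean, computed from two analyses of the same footage. The "observations"
# below are made-up numbers, not the rugby data.
import math
import statistics

def typical_error_percent(trial1, trial2):
    diffs = [b - a for a, b in zip(trial1, trial2)]
    tem = statistics.stdev(diffs) / math.sqrt(2)
    grand_mean = statistics.mean(trial1 + trial2)
    return 100.0 * tem / grand_mean

time_walking_1 = [312, 298, 305, 290, 310]   # seconds, first analysis (hypothetical)
time_walking_2 = [305, 310, 298, 295, 301]   # seconds, repeat analysis (hypothetical)
print(f"{typical_error_percent(time_walking_1, time_walking_2):.1f}% TEM")
```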
Abstract:
Pectus excavatum is the most common deformity of the thorax. A minimally invasive surgical correction is commonly carried out to remodel the anterior chest wall by using an intrathoracic convex prosthesis in the substernal position. The process of prosthesis modeling and bending still remains an area for improvement. The authors developed a new system, i3DExcavatum, which can automatically model and bend the bar preoperatively based on a thoracic CT scan. This article presents a comparison between automatic and manual bending. The i3DExcavatum was used to personalize prostheses for 41 patients who underwent pectus excavatum surgical correction between 2007 and 2012. Regarding the anatomical variations, the soft-tissue thicknesses external to the ribs show that both symmetric and asymmetric patients always have asymmetric variations when the patients' sides are compared. This highlights that the prosthesis bar should be modeled according to each patient's rib positions and dimensions. The average differences between the skin and costal line curvature lengths were 84 ± 4 mm and 96 ± 11 mm for male and female patients, respectively. On the other hand, the i3DExcavatum ensured a smooth curvature of the surgical prosthesis and was capable of predicting and simulating a virtual shape and size of the bar for asymmetric and symmetric patients. In conclusion, the i3DExcavatum allows preoperative personalization according to the thoracic morphology of each patient. It reduces surgery time and minimizes the margin of error introduced by the manually bent bar, which only uses a template that copies the chest wall curvature.
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion compensation errors in some frame regions and rather small errors in others, depending on the video content. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB relative to a WZ-only coding solution.
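The abstract does not detail its rate estimators, so the sketch below only illustrates the per-block decision structure: a crude Intra-rate proxy (spatial variance) is compared with a crude WZ-rate proxy (mismatch against the motion-compensated SI), and the cheaper mode is kept. The block size, the proxies, and the synthetic frames are assumptions for illustration, not the paper's estimators.

```python
# Hedged sketch of a per-block Intra/WZ mode decision with stand-in rate
# estimators. The frames below are synthetic.
import numpy as np

def estimate_intra_rate(block):
    # Proxy: higher spatial activity -> more Intra bits.
    return float(np.var(block))

def estimate_wz_rate(block, side_info_block):
    # Proxy: larger SI error (bad interpolation) -> more WZ bits.
    return float(np.mean((block.astype(float) - side_info_block.astype(float)) ** 2))

def select_modes(frame, side_info, block=16):
    modes = {}
    for y in range(0, frame.shape[0], block):
        for x in range(0, frame.shape[1], block):
            b = frame[y:y + block, x:x + block]
            s = side_info[y:y + block, x:x + block]
            modes[(y, x)] = "INTRA" if estimate_intra_rate(b) < estimate_wz_rate(b, s) else "WZ"
    return modes

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
side_info = frame.copy()
side_info[0:16, 0:16] = 0               # one badly interpolated region
print(select_modes(frame, side_info)[(0, 0)])   # "INTRA" for the critical block
```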
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder and not at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach to successively improve the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves against the performance of the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
Abstract:
Nowadays, there is more and more audiovisual information, and multimedia streams or files can be shared easily and efficiently. However, the tampering of video content, such as financial information, news, or videoconference sessions used in court, can have serious consequences because of the importance of that kind of information. Hence the need to ensure the authenticity and integrity of audiovisual information. This dissertation proposes an authentication system for H.264/Advanced Video Coding (AVC) video, called Autenticação de Fluxos utilizando Projecções Aleatórias (AFPA, stream authentication using random projections), whose authentication procedures are carried out at the level of each video frame. This scheme allows a more flexible kind of authentication, since it makes it possible to define a maximum limit on the modifications between two frames. Authentication uses a new image authentication technique that combines random projections with an error correction mechanism applied to the data. Each video frame can thus be authenticated with a reduced set of parity bits of the corresponding random projection. Since video information is typically transported over unreliable protocols, it may suffer packet losses. To reduce the effect of packet losses on video quality and on the authentication rate, Unequal Error Protection (UEP) is used. For validation and comparison of the results, a classical system was implemented that authenticates video streams in the usual way, that is, using digital signatures and hash codes. Both schemes were evaluated with respect to the overhead introduced and the authentication rate. The results show that the AFPA system, for a high-quality video, reduces the authentication overhead by a factor of four compared with the scheme based on digital signatures and hash codes.
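As a simplified illustration of the per-frame authentication idea, the sketch below projects a frame onto pseudo-random directions derived from a shared seed and accepts a received frame when its projection stays within a tolerance of the reference signature. The real AFPA scheme transmits parity bits of an error-correcting code over the projection rather than applying a distance threshold; the threshold, seed, and frame size here are assumptions.

```python
# Simplified sketch of frame authentication with random projections. The
# tolerance to small modifications is emulated with a plain distance
# threshold, which is an assumption for illustration only.
import numpy as np

def frame_signature(frame, seed=1234, k=64):
    """Project the flattened frame onto k pseudo-random directions derived
    from a shared seed (the seed plays the role of a shared secret)."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((k, frame.size))
    return proj @ frame.astype(float).ravel()

def authenticate(received_frame, reference_signature, tolerance=0.05):
    """Accept the frame if its projection stays close to the reference one."""
    sig = frame_signature(received_frame)
    rel_dist = np.linalg.norm(sig - reference_signature) / np.linalg.norm(reference_signature)
    return rel_dist <= tolerance

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)   # QCIF-sized frame
signature = frame_signature(original)

slightly_noisy = np.clip(original.astype(int) + rng.integers(-1, 2, size=original.shape), 0, 255)
tampered = original.copy()
tampered[:40, :40] = 0                                             # large local change

print(authenticate(slightly_noisy, signature))   # True: small changes tolerated
print(authenticate(tampered, signature))         # False: tampering rejected
```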
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
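The joint decoding with message exchange between LDPC syndrome decoders is beyond a short sketch, so the example below only illustrates the underlying multi-hypothesis idea with a per-pixel inverse-variance weighting of the SI hypotheses; the frames, noise levels, and weighting rule are assumptions, not the paper's algorithm.

```python
# Crude stand-in for multi-hypothesis SI fusion: instead of iterative
# LDPC-based Slepian-Wolf decoding, several SI frames are fused with
# inverse-variance weights, each hypothesis carrying an estimate of its
# correlation-noise variance. All data below is synthetic.
import numpy as np

def fuse_side_information(hypotheses, noise_variances):
    """Per-pixel inverse-variance weighted average of the SI hypotheses."""
    weights = np.array([1.0 / v for v in noise_variances], dtype=float)
    weights /= weights.sum()
    fused = np.zeros_like(hypotheses[0], dtype=float)
    for w, h in zip(weights, hypotheses):
        fused += w * h.astype(float)
    return fused

rng = np.random.default_rng(2)
true_frame = rng.integers(0, 256, size=(72, 88)).astype(float)

# Two MCFI-style hypotheses: one mildly noisy, one strongly noisy.
si_good = true_frame + rng.normal(0, 4, true_frame.shape)
si_bad = true_frame + rng.normal(0, 16, true_frame.shape)

fused = fuse_side_information([si_good, si_bad], noise_variances=[4 ** 2, 16 ** 2])
for name, si in [("good", si_good), ("bad", si_bad), ("fused", fused)]:
    print(name, round(float(np.mean((si - true_frame) ** 2)), 1))  # fused MSE is lowest
```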