935 results for Volumetric error
Abstract:
Obstructive lung diseases of different etiologies present with progressive peripheral airway involvement. The peripheral airways, known as the silent lung zone, are not adequately evaluated with conventional function tests. The principle of gas washout has been used to detect pulmonary ventilation inhomogeneity and to estimate the location of the underlying disease process. Volumetric capnography (VC) analyzes the pattern of CO2 elimination as a function of expired volume. The objective of this study was to measure normalized phase 3 slopes with VC in patients with non-cystic fibrosis bronchiectasis (NCB) and in bronchitic patients with chronic obstructive pulmonary disease (COPD), and to compare the slopes obtained for the two groups. Patients with NCB and with severe COPD were enrolled sequentially from an outpatient clinic (Hospital of the State University of Campinas). A control group was established for the NCB group, matched by sex and age. All subjects performed spirometry, VC, and the 6-Minute Walk Test (6MWT). Two comparisons were made: the NCB group versus its control group, and the NCB group versus the COPD group. The project was approved by the ethics committee of the institution. The statistical tests used were the Wilcoxon test or Student's t-test; P < 0.05 was considered statistically significant. In the comparison of the NCB group (N = 20) with the control group (N = 20), significant differences were found in body mass index and in several functional variables (spirometric, VC, and 6MWT), with worse results in the NCB group. In the comparison of the COPD group (N = 20) with the NCB group, although the patients with COPD had worse spirometric and 6MWT values, the capnographic variables (mean phase 2 slope (Slp2), mean phase 3 slope normalized by the mean expired volume, and mean phase 3 slope normalized by the end-tidal CO2 concentration) were similar between the groups. These findings may indicate that the gas elimination curves are not sensitive enough to monitor the severity of structural abnormalities. The normalized phase 3 slope may be worth exploring as a more sensitive index of small airway disease, even though it may not be equally sensitive in discriminating the severity of the alterations.
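For readers unfamiliar with the capnographic indices named above, the sketch below shows how a normalized phase 3 slope can be computed from a single CO2 versus expired-volume curve: a line is fitted over the alveolar plateau and the slope is scaled by end-tidal CO2 or by the expired volume. The phase 3 window bounds, the synthetic breath signal, and the exact normalization conventions are illustrative assumptions, not the study's protocol.

```python
# Minimal sketch of normalized phase 3 slope calculation from one volumetric
# capnogram. Window bounds and normalization conventions are assumptions.
import numpy as np

def normalized_phase3_slopes(volume_ml, co2_percent):
    """Fit a line to the alveolar plateau (phase 3) of the CO2 vs expired
    volume curve and return the slope with two common normalizations."""
    vt = volume_ml[-1]                                   # expired tidal volume (mL)
    mask = (volume_ml >= 0.50 * vt) & (volume_ml <= 0.75 * vt)  # assumed phase 3 window
    slope, _ = np.polyfit(volume_ml[mask], co2_percent[mask], 1)  # %CO2 per mL
    etco2 = co2_percent[-1]                              # end-tidal CO2 concentration
    return {
        "slp3": slope,
        "slp3_over_etco2": slope / etco2,                # normalized by end-tidal CO2
        "slp3_over_ve": slope / vt,                      # normalized by expired volume
    }

# Synthetic single-breath example
v = np.linspace(0, 500, 200)                             # mL expired
co2 = np.clip(0.012 * v - 1.0, 0, None) + 0.002 * np.maximum(v - 250, 0)
print(normalized_phase3_slopes(v, co2))
```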
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in a specific minor allele frequency (MAF) range, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
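The kind of pairwise comparison described above can be illustrated with a basic allelic chi-square test run once on empirical genotype counts and once on imputed allele dosages for the same marker. The synthetic cohort, the 2x2 allelic table construction, and the noise model for the dosages are assumptions for illustration, not the study's pipeline.

```python
# Sketch: compare an allelic association test computed from empirical genotype
# counts with the same test computed from imputed allele dosages (assumed setup).
import numpy as np
from scipy.stats import chi2_contingency

def allelic_chi2_p(minor_allele_counts, case_mask):
    """2x2 allelic test: minor vs major allele counts in cases vs controls."""
    minor_cases = minor_allele_counts[case_mask].sum()
    minor_ctrls = minor_allele_counts[~case_mask].sum()
    major_cases = 2 * case_mask.sum() - minor_cases
    major_ctrls = 2 * (~case_mask).sum() - minor_ctrls
    table = np.array([[minor_cases, major_cases],
                      [minor_ctrls, major_ctrls]])
    return chi2_contingency(table, correction=False)[1]

rng = np.random.default_rng(0)
n = 2000
cases = rng.random(n) < 0.5
p_minor = np.where(cases, 0.25, 0.18)                      # weak synthetic association
genotyped = rng.binomial(2, p_minor).astype(float)         # empirical 0/1/2 counts
dosages = np.clip(genotyped + rng.normal(0, 0.3, n), 0, 2) # noisy imputed dosages

print(f"empirical P = {allelic_chi2_p(genotyped, cases):.3g}")
print(f"imputed   P = {allelic_chi2_p(dosages, cases):.3g}")
```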
Abstract:
BACKGROUND: Xylitol is a sugar alcohol (polyalcohol) with many interesting properties for pharmaceutical and food products. It is currently produced by a chemical process, which has some disadvantages such as a high energy requirement. Therefore, microbiological production of xylitol has been studied as an alternative, but its viability depends on optimisation of the fermentation variables. Among these, aeration is fundamental, because xylitol is produced only under adequate oxygen availability. In most experiments with xylitol-producing yeasts, low values of the volumetric oxygen transfer coefficient (kLa) are used to maintain microaerobic conditions. However, in the present study the use of relatively high kLa values resulted in high xylitol production. The effect of aeration was also evaluated via the profiles of xylose reductase (XR) and xylitol dehydrogenase (XD) activities during the experiments. RESULTS: The highest XR specific activity (1.45 +/- 0.21 U mg protein^-1) was achieved in the experiment with the lowest kLa value (12 h^-1), while the highest XD specific activity (0.19 +/- 0.03 U mg protein^-1) was observed with a kLa value of 25 h^-1. Xylitol production was enhanced when kLa was increased from 12 to 50 h^-1, which gave the best condition observed, corresponding to a xylitol volumetric productivity of 1.50 +/- 0.08 g xylitol L^-1 h^-1 and an efficiency of 71 +/- 6.0%. CONCLUSION: The results showed that the enzyme activities during xylitol bioproduction depend greatly on the initial kLa value (oxygen availability). This finding supplies important information for further studies in molecular biology and genetic engineering aimed at improving xylitol bioproduction. (C) 2008 Society of Chemical Industry
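The two quantities central to this abstract follow from simple relations: the oxygen transfer rate driven by kLa, OTR = kLa (C* - C_L), and the volumetric productivity, Q_p = xylitol produced per litre divided by fermentation time. The sketch below states both; the saturation concentration, dissolved oxygen level, and fermentation time are placeholder values, not data from the study.

```python
# Back-of-the-envelope relations behind the reported quantities.
# All numeric inputs below are placeholders for illustration only.

def oxygen_transfer_rate(kla_per_h, c_sat_g_per_l, c_liquid_g_per_l):
    """OTR = kLa * (C* - C_L), in g O2 per litre per hour."""
    return kla_per_h * (c_sat_g_per_l - c_liquid_g_per_l)

def volumetric_productivity(xylitol_g_per_l, time_h):
    """Q_p = xylitol produced per litre / fermentation time, in g L^-1 h^-1."""
    return xylitol_g_per_l / time_h

# Assumed oxygen saturation ~7.5 mg/L and a low dissolved oxygen level
print(oxygen_transfer_rate(kla_per_h=50, c_sat_g_per_l=0.0075, c_liquid_g_per_l=0.002))
# Placeholder concentrations and time, not the experimental values
print(volumetric_productivity(xylitol_g_per_l=72.0, time_h=48.0))
```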
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has an II of zero and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total (composed) gross error of that measurement is then obtained. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
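For context, the classical baseline that the composed-residual test modifies is the largest normalized residual test of weighted least squares (WLS) state estimation. The sketch below shows that classical computation (residual sensitivity matrix, residual covariance, normalized residuals) on a toy linear measurement model; the Jacobian, covariances, and measurements are invented for illustration, not the IEEE 14-bus data, and the innovation-index composition itself is not reproduced here.

```python
# Classical largest normalized residual test for a linearized WLS state
# estimator. H, R and z form a toy measurement model (assumed values).
import numpy as np

H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [2.0, 1.0]])              # measurement Jacobian (m x n)
R = np.diag([0.01, 0.01, 0.01, 0.01])   # measurement error covariance
z = np.array([1.02, 2.05, 1.01, 3.50])  # last measurement carries a gross error

W = np.linalg.inv(R)
G = H.T @ W @ H                          # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)  # WLS state estimate
r = z - H @ x_hat                        # measurement residuals

S = np.eye(len(z)) - H @ np.linalg.solve(G, H.T @ W)  # residual sensitivity matrix
omega = S @ R                            # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(omega))          # normalized residuals

print("normalized residuals:", np.round(r_norm, 2))
print("suspect measurement :", int(np.argmax(r_norm)))
```

A critical measurement would show up here as a diagonal element of omega equal to zero, which is exactly the case the innovation index is designed to handle.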
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general, the most important for newer machines. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
This paper presents a technological viability study of the treatment of wastewater from an automobile industry in an anaerobic sequencing batch biofilm reactor containing immobilized biomass (AnSBBR) with a draft tube. The reactor was operated in 8-h cycles, with agitation at 400 rpm, at 30 degrees C, treating 2.0 L of wastewater per cycle. Initially, the efficiency and stability of the reactor were studied when supplied with nutrients and alkalinity. A removal efficiency of 88% was obtained at a volumetric loading rate (VLR) of 3.09 mg COD/L day. When the VLR was increased to 6.19 mg COD/L day, the system remained stable but removal efficiency decreased to 71%. In a second stage, the AnSBBR was operated treating the wastewater in natura, i.e., without nutrient supplementation, only with alkalinity, while changing the feed strategy. The first strategy consisted of feeding 2.0 L batchwise (10 min); the second of feeding 1.0 L of influent batchwise (10 min) plus an additional 1.0 L fed-batchwise (4 h), both with discharge of 2.0 L of effluent in 10 min. The third strategy maintained 1.0 L of treated effluent in the reactor, without discharging it, while 1.0 L of influent was fed fed-batchwise (4 h) and 1.0 L of effluent was discharged in 10 min. For all implemented strategies (VLR of 1.40, 2.57 and 2.61 mg COD/L day) the system presented stability and a removal efficiency of approximately 80%. These results show that the AnSBBR offers operational flexibility, as the influent can be fed according to industry availability. In industrial processes this is a considerable advantage, as the influent may be prone to variations. Moreover, for all the investigated conditions the kinetic parameters were obtained by fitting a first-order model to the profiles of organic matter, total volatile acids and methane concentrations. Analysis of the kinetic parameters showed that the best strategy is feeding 1.0 L of influent batchwise (10 min) and 1.0 L fed-batchwise (4 h) in an 8-h cycle. (c) 2007 Elsevier B.V. All rights reserved.
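A first-order kinetic model of the kind fitted to the concentration profiles has the form C(t) = C_res + (C0 - C_res) e^(-k1 t). The sketch below fits that model with a nonlinear least-squares routine; the concentration profile is synthetic and only the model form follows the description in the abstract.

```python
# First-order kinetic fit of the type described: C(t) = C_res + (C0 - C_res)*exp(-k1*t).
# The COD profile below is synthetic data used only to exercise the fit.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, c_res, k1):
    return c_res + (c0 - c_res) * np.exp(-k1 * t)

t_h = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8], dtype=float)   # cycle time (h)
cod = np.array([1950, 1410, 1050, 820, 660, 560, 500, 470, 450], dtype=float)  # mg COD/L

popt, _ = curve_fit(first_order, t_h, cod, p0=[2000, 400, 0.5])
c0_fit, c_res_fit, k1_fit = popt
print(f"C0 = {c0_fit:.0f} mg/L, C_res = {c_res_fit:.0f} mg/L, k1 = {k1_fit:.2f} 1/h")
```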
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error-correcting codes, rather than restricted families such as alternant codes, for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
The volumetric reconstruction technique presented in this paper employs a two-camera stereoscopic particle image velocimetry (SPIV) system to reconstruct the mean flow behind a fixed cylinder fitted with helical strakes, which are commonly used to suppress vortex-induced vibrations (VIV). The technique is based on the measurement of velocity fields at equivalent adjacent planes, resulting in pseudo-volumetric fields. The main advantage over proper volumetric techniques is the avoidance of additional equipment and complexity. The averaged velocity fields behind the straked cylinders and the geometrical periodicity of the three-start configuration are used to further simplify the reconstruction process. Two straked cylindrical models with the same pitch (p = 10d) and two different heights (h = 0.1d and 0.2d) are tested. The reconstructed flow shows that the strakes introduce into the wake a well-defined wavelength of one-third of the pitch. Measurements of hydrodynamic forces, fluctuating velocity, vortex formation length, and vortex shedding frequency show the interdependence of the wake parameters. The vortex formation length is increased by the strakes, which is an important effect for the suppression of vortex-induced vibrations. The results presented complement previous investigations of the effectiveness of strakes as VIV suppressors and provide a basis for comparison with numerical simulations.
Abstract:
This paper reports the use of a non-destructive, continuous magnetic Barkhausen noise (CMBN) technique to investigate the size and thickness of volumetric defects in a 1070 steel. The magnetic behavior of the probe used was analyzed by numerical simulation using the finite element method (FEM). The results indicated that the presence of a ferrite coil core in the probe favors MBN emissions. The samples were scanned with different speeds and probe configurations to determine the effect of the flaw on the CMBN signal amplitude. A moving smoothing window, based on a second-order statistical moment, was used to analyze the time signal. The results show the technique's good repeatability and high capability for detecting this type of defect. (C) 2009 Elsevier Ltd. All rights reserved.
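A moving window based on a second-order statistical moment amounts to a running variance envelope of the CMBN time signal, which rises where the signal amplitude changes near a defect. The sketch below computes such an envelope; the window length and the synthetic signal are illustrative assumptions, not the values used in the paper.

```python
# Running variance (second-order moment) envelope over a time signal,
# with a synthetic amplitude change standing in for a defect region.
import numpy as np

def moving_variance(signal, window):
    """Sliding-window variance from moving averages of x and x^2 (biased)."""
    kernel = np.ones(window) / window
    mean = np.convolve(signal, kernel, mode="same")
    mean_sq = np.convolve(signal ** 2, kernel, mode="same")
    return mean_sq - mean ** 2

rng = np.random.default_rng(1)
n = 5000
signal = rng.normal(0, 1.0, n)
signal[2200:2600] *= 2.5                 # amplitude change mimicking a defect
envelope = moving_variance(signal, window=200)
print("peak of variance envelope at sample", int(np.argmax(envelope)))
```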
Abstract:
The purpose of this article is to present a quantitative analysis of the contribution of human failure to the collision and/or grounding of oil tankers, considering the recommendations of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the methodology employed is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. Later, this methodology is applied to a ship operating on the Brazilian coast and, thereafter, the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. Therefore, an operator will be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can be considered a reference for human reliability analysis in the maritime industry, which, despite having some guides for risk analysis, has few studies on human reliability effectively applied to the sector.
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
Here, we examine morphological changes in the cortical thickness of patients with Alzheimer's disease (AD) using image analysis algorithms for brain structure segmentation, and we study the automatic classification of AD patients using cortical and volumetric data. Cortical thickness of AD patients (n = 14) was measured using MRI cortical surface-based analysis and compared with healthy subjects (n = 20). Data were analyzed using an automated algorithm for tissue segmentation and classification. A Support Vector Machine (SVM) was applied to the volumetric measurements of subcortical and cortical structures to separate AD patients from controls. The group analysis showed cortical thickness reduction in the superior temporal lobe, parahippocampal gyrus, and entorhinal cortex in both hemispheres. We also found cortical thinning in the isthmus of the cingulate gyrus and the middle temporal gyrus in the right hemisphere, as well as a reduction of the cortical mantle in areas previously shown to be associated with AD. We also confirmed that automatic classification algorithms (SVM) can be helpful in distinguishing AD patients from healthy controls. Moreover, the same areas implicated in the pathogenesis of AD were the main parameters driving the classification algorithm. While the patient sample used in this study was relatively small, we expect that using a database of regional volumes derived from MRI scans of a large number of subjects will increase the power of SVM-based identification of AD patients.
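The classification step described above can be sketched as an SVM over regional volumetric features evaluated with cross-validation. The synthetic feature matrix, the linear kernel, the scaling step, and the cross-validation scheme below are assumptions for illustration, not the study's exact pipeline.

```python
# Sketch of SVM classification over regional volumes (synthetic features).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_ad, n_ctrl, n_regions = 14, 20, 30                  # subjects and regional volumes
X_ad = rng.normal(loc=-0.5, scale=1.0, size=(n_ad, n_regions))    # synthetic "atrophy"
X_ctrl = rng.normal(loc=0.0, scale=1.0, size=(n_ctrl, n_regions))
X = np.vstack([X_ad, X_ctrl])
y = np.array([1] * n_ad + [0] * n_ctrl)               # 1 = AD, 0 = control

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```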
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within and between sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
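The bias referred to above can be illustrated with a small simulation: when sub-population means are observed with sampling error, the naive Pearson correlation of the observed means is attenuated relative to the true between-sub-population correlation. The sketch below shows this attenuation and applies the classical reliability-based correction only to indicate the size of the effect; it is not the maximum likelihood estimator proposed in the paper, and all numbers are invented.

```python
# Simulation of attenuation of the Pearson correlation of sub-population
# means observed with sampling error (assumed parameters, illustration only).
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_per_site, rho = 40, 25, 0.8

# True site-level means of two measurements, correlated with rho
cov = np.array([[1.0, rho], [rho, 1.0]])
true_means = rng.multivariate_normal([0.0, 0.0], cov, size=n_sites)

# Observed site means = true means + within-site sampling error
sigma_within = 2.0
err = rng.normal(0, sigma_within / np.sqrt(n_per_site), size=(n_sites, 2))
observed = true_means + err

r_naive = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]
reliability = 1.0 / (1.0 + sigma_within**2 / n_per_site)   # var(true)/var(observed)
r_corrected = r_naive / reliability                        # classical disattenuation

print(f"true rho = {rho}, naive r = {r_naive:.2f}, corrected r = {r_corrected:.2f}")
```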