899 results for Error correction methods
Abstract:
Background: Current methodology of gene expression analysis limits the possibilities of comparison between cells/tissues of organs in which cell size and/or number changes as a consequence of the study (e.g. starvation). A method relating the abundance of specific mRNA copies per cell may allow direct comparison of different organs and/or changing physiological conditions. Methods: With a number of selected genes, we analysed the relationship between the number of bases and the fluorescence recorded at a preset level using cDNA standards. A linear relationship was found between the final number of bases and the length of the transcript. The constants of this equation and those of the relationship between fluorescence and number of bases in cDNA were determined, and a general equation linking the length of the transcript and the initial number of copies of mRNA was deduced for a given pre-established fluorescence setting. This allowed the calculation of the concentration of the corresponding mRNAs per gram of tissue. The inclusion of tissue RNA and the DNA content per cell allowed the calculation of the mRNA copies per cell. Results: The application of this procedure to six genes: Arbp, cyclophilin, ChREBP, T4 deiodinase 2, acetyl-CoA carboxylase 1 and IRS-1, in liver and retroperitoneal adipose tissue of food-restricted rats allowed precise measurement of their changes irrespective of the shrinking of the tissue, the loss of cells or changes in cell size, factors that deeply complicate the comparison between changing tissue conditions. The percentage results obtained with the present method were essentially the same as those obtained with the delta-delta procedure and with individual cDNA standard curve quantitative RT-PCR estimation. Conclusion: The method presented allows the comparison (i.e. as copies of mRNA per cell) between different genes and tissues, establishing the degree of abundance of the different molecular species tested.
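The last conversion step described above (from mRNA copies per gram of tissue to copies per cell, via the tissue DNA content) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the DNA-per-cell constant (roughly 6.6 pg for a diploid mammalian cell) are assumptions for the example.

```python
# Sketch: convert mRNA copies per gram of tissue into copies per cell.
# The DNA-per-cell default below is an assumed illustrative value
# (~6.6 pg per diploid cell), not a figure taken from the paper.

def mrna_copies_per_cell(copies_per_g_tissue, dna_ug_per_g_tissue,
                         dna_pg_per_cell=6.6):
    """Number of mRNA copies per cell.

    copies_per_g_tissue : mRNA copies per gram of tissue (from RT-PCR)
    dna_ug_per_g_tissue : measured tissue DNA content, ug DNA / g tissue
    dna_pg_per_cell     : DNA per cell in pg (assumed diploid value)
    """
    cells_per_g = dna_ug_per_g_tissue * 1e6 / dna_pg_per_cell  # ug -> pg
    return copies_per_g_tissue / cells_per_g

# Example: 1e10 copies/g tissue and 2000 ug DNA/g tissue
print(mrna_copies_per_cell(1e10, 2000))
```

The tissue-level copy number cancels out of between-tissue comparisons only after this normalisation, which is what makes the per-cell figure robust to tissue shrinkage or cell loss.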
Abstract:
Question: When multiple observers record the same spatial units of alpine vegetation, how much variation is there in the records and what are the consequences of this variation for monitoring schemes to detect change? Location: One test summit in Switzerland (Alps) and one test summit in Scotland (Cairngorm Mountains). Method: Eight observers used the GLORIA protocols for species composition and visual cover estimates in percent on large summit sections (>100 m2) and species composition and frequency in nested quadrats (1 m2). Results: The multiple records from the same spatial unit for species composition and species cover showed considerable variation in the two countries. Estimates of pseudoturnover of composition and coefficients of variation of cover estimates for vascular plant species in 1 m x 1 m quadrats showed less variation than in previously published reports, whereas our results in larger sections were broadly in line with previous reports. In Scotland, estimates for bryophytes and lichens were more variable than for vascular plants. Conclusions: Statistical power calculations indicated that, unless large numbers of plots were used, changes in cover or frequency were only likely to be detected for abundant species (exceeding 10% cover) or if relative changes were large (50% or more). Lower variation could be reached with the point methods and with larger numbers of small plots. However, as summits often strongly differ from each other, supplementary summits cannot be considered as a way of increasing statistical power without introducing a supplementary component of variance into the analysis and hence the power calculations.
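The kind of power calculation the conclusions refer to can be sketched with a normal approximation for detecting a change in mean cover between two surveys. The effect size, standard deviation, and plot counts below are illustrative assumptions, not values from the study:

```python
import math

# Sketch: approximate power of a two-sample test for a change in mean
# cover, using the normal approximation. All numbers are illustrative.

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(delta, sd, n, z_alpha=1.959964):
    """Approximate power to detect a true change `delta` in mean cover.

    delta : true change in mean cover (same units as sd)
    sd    : between-plot standard deviation of cover estimates
    n     : number of plots per survey
    """
    se = sd * math.sqrt(2.0 / n)          # s.e. of the difference in means
    return norm_cdf(delta / se - z_alpha)

# A 50% relative change in a species at 10% cover (delta = 5 points),
# with an assumed between-plot sd of 8 points:
for n in (10, 50, 200):
    print(n, round(power_two_sample(5.0, 8.0, n), 2))
```

With few plots the power stays low, matching the conclusion that only abundant species or large relative changes are likely to be detected.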
Abstract:
Diffusion-weighting in magnetic resonance imaging (MRI) increases the sensitivity to molecular Brownian motion, providing insight into the micro-environment of the underlying tissue types and structures. At the same time, the diffusion weighting renders the scans sensitive to other motion, including bulk patient motion. Typically, several image volumes are needed to extract diffusion information, which also makes the acquisition susceptible to inter-volume motion. Bulk motion is more likely during long acquisitions, such as those used in diffusion tensor, diffusion spectrum and q-ball imaging. Image registration methods are successfully used to correct for bulk motion in other MRI time series, but their performance in diffusion-weighted MRI is limited, since diffusion weighting introduces strong signal and contrast changes between serial image volumes. In this work, we combine the capability of free induction decay (FID) navigators, providing information on object motion, with image registration methodology to prospectively (or optionally retrospectively) correct for motion in diffusion imaging of the human brain. Eight healthy subjects were instructed to perform small-scale voluntary head motion during clinical diffusion tensor imaging acquisitions. The implemented motion detection based on FID navigator signals was processed in real time and provided excellent detection of voluntary motion patterns even at a sub-millimetre scale (sensitivity >= 92%, specificity > 98%). Motion detection triggered an additional image volume acquisition with b = 0 s/mm2, which was subsequently co-registered to a reference volume. In the prospective correction scenario, the calculated motion parameters were applied to perform a real-time update of the gradient coordinate system to correct for the head movement. Quantitative analysis revealed that the motion correction implementation is capable of correcting head motion in diffusion-weighted MRI to a level comparable to scans without voluntary head motion. The results indicate the potential of this method to improve image quality in diffusion-weighted MRI, a concept that can also be applied when the highest diffusion weightings are used.
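The detection stage can be illustrated with a much-simplified threshold detector on a scalar navigator signal, scored with the same sensitivity/specificity metrics the abstract reports. The signal model, the 5-sigma threshold, and the motion periods are illustrative assumptions; the paper's FID-navigator processing is more elaborate:

```python
import numpy as np

# Sketch: threshold-based motion detection on a synthetic scalar
# navigator signal. All signal parameters are illustrative assumptions.

rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(0.0, 0.01, n)           # quiet navigator readings
moved = np.zeros(n, dtype=bool)
moved[60:80] = moved[140:150] = True          # ground-truth motion periods
signal = baseline + 0.1 * moved               # motion shifts the navigator

# Detect: flag samples deviating from the quiet-start baseline by > 5 sigma
sigma = np.std(signal[:50])                   # noise level from quiet start
detected = np.abs(signal - np.median(signal[:50])) > 5 * sigma

sensitivity = np.mean(detected[moved])
specificity = np.mean(~detected[~moved])
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Even this crude detector separates a 10-sigma shift cleanly; the practical difficulty addressed in the paper lies in extracting a reliable motion-sensitive scalar from the FID signal in real time.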
Abstract:
The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. Consequently, this algorithm shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux reconstruction step, which delivers a fine-scale mass conservative flux field without the need for iterating. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm that is applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporation of appropriate coarse-scale mass-balance equations.
Abstract:
This paper investigates the use of ensembles of predictors to improve the performance of spatial prediction methods. Support vector regression (SVR), a popular method from the field of statistical machine learning, is used. Several instances of SVR are combined using different data sampling schemes (bagging and boosting). Bagging shows good performance and proves to be more computationally efficient than training a single SVR model, while also reducing error. Boosting, however, does not improve results on this specific problem.
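The bagging scheme itself (bootstrap-resample the training set, fit one base model per resample, average the predictions) can be sketched as follows. To keep the example dependency-free, ridge regression stands in for SVR as the base learner; that substitution, the toy data, and all parameter values are assumptions for illustration only:

```python
import numpy as np

# Sketch of bagging: train base regressors on bootstrap resamples and
# average their predictions. Ridge regression is a stand-in for SVR here.

def fit_ridge(X, y, lam=1e-3):
    Xb = np.hstack([X, np.ones((len(X), 1))])            # add intercept
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def predict_ridge(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

def bagging_predict(X_train, y_train, X_test, n_models=25, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap sample
        w = fit_ridge(X_train[idx], y_train[idx])
        preds.append(predict_ridge(w, X_test))
    return np.mean(preds, axis=0)                          # aggregate

# Toy spatial-prediction data: a linear trend in 2-D coordinates + noise
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (100, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 100)
y_hat = bagging_predict(X[:80], y[:80], X[80:])
rmse = np.sqrt(np.mean((y_hat - y[80:]) ** 2))
print(round(rmse, 3))
```

Because the base models are trained independently, bagging also parallelises trivially, which is one source of the computational advantage the abstract mentions.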
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN) that have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
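The core FMG/OPT idea, treating a correction vector as a search direction and scaling it with a line search, can be sketched on a toy problem. Here a negative gradient stands in for the coarse-grid correction and a simple Armijo backtracking rule does the scaling; the quadratic objective and all constants are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Sketch: scale a correction direction with an Armijo backtracking line
# search, as FMG/OPT does with its coarse-grid correction. The quadratic
# objective is a toy stand-in for a variational optical-flow energy.

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

def line_search_step(x, direction, c1=1e-4, shrink=0.5):
    """Backtrack on the step length until the Armijo condition holds."""
    alpha, fx, slope = 1.0, f(x), grad(x) @ direction
    while f(x + alpha * direction) > fx + c1 * alpha * slope:
        alpha *= shrink
    return x + alpha * direction

x = np.zeros(2)
for _ in range(30):
    d = -grad(x)              # stand-in for a coarse-grid correction
    x = line_search_step(x, d)

print(x)
```

The line search is what lets the method accept a coarse-grid correction even when its raw length would overshoot on the fine grid.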
Abstract:
PURPOSE: To review, retrospectively, the possible causes of sub- or intertrochanteric fractures after screw fixation of intracapsular fractures of the proximal femur. METHODS: Eighty-four patients with an intracapsular fracture of the proximal femur were operated on between 1995 and 1998 using three cannulated 6.25 mm screws. The screws were inserted in a triangular configuration, one screw in the upper part of the femoral neck and two screws in the inferior part. Between 1999 and 2001, we used two screws proximally and one screw distally. RESULTS: In the first series, two patients died within one week after the operation. Sixty-four fractures healed without problems. Four patients developed an atrophic non-union; avascular necrosis of the femoral head was found in 11 patients. Three patients (3.6%) suffered a sub- and/or intertrochanteric fracture after a mean postoperative time of 30 days, in one case without obvious trauma. In all three cases surgical revision was necessary. Between 1999 and 2001 we did not observe any fracture after screw fixation. CONCLUSION: Two screws in the inferior part of the femoral neck create a stress riser in the subtrochanteric region, potentially inducing a fracture in the weakened bone. For internal fixation of proximal intracapsular femoral fractures, only one screw should be inserted in the inferior part of the neck.
Abstract:
In the last five years, Deep Brain Stimulation (DBS) has become the most popular and effective surgical technique for the treatment of Parkinson's disease (PD). The Subthalamic Nucleus (STN) is the usual target involved when applying DBS. Unfortunately, the STN is in general not visible in common medical imaging modalities. Therefore, atlas-based segmentation is commonly considered to locate it in the images. In this paper, we propose a scheme that allows us both to compare different registration algorithms and to evaluate their ability to locate the STN automatically. Using this scheme we can evaluate the expert variability against the error of the algorithms, and we demonstrate that automatic STN location is possible and as accurate as the methods currently used.
Abstract:
Global positioning systems (GPS) offer a cost-effective and efficient method to input and update transportation data. The spatial location of objects provided by GPS is easily integrated into geographic information systems (GIS). The storage, manipulation, and analysis of spatial data are also relatively simple in a GIS. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. In order for the spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. In order to evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created. To perform the pilot study, point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments that were located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM or between LRMs are discussed and recommendations provided. The accuracy of the GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three-dimensional or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM. Recommendations such as storing point features as spatial objects if necessary or preserving information such as coordinates and elevation are suggested. The lack of spatial accuracy characteristic of most cartography, on which LRMs are often based, is another topic discussed. The associated issues include linear and horizontal offset error. The final topic discussed is some of the issues in transferring point feature data between LRMs.
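The 2-D-to-1-D conversion discussed above can be sketched as snapping a GPS point to a polyline route and returning its measure along the route. The perpendicular offset returned alongside the measure is exactly the information that is lost in the conversion (and that the report recommends preserving); the coordinates below are illustrative planar units:

```python
import math

# Sketch: snap a 2-D point to a polyline route and return its linear
# measure, illustrating conversion to a linear referencing method (LRM).

def linear_measure(route, px, py):
    """Return (measure, offset): distance along `route` of the closest
    point to (px, py), and the perpendicular offset that is lost when
    the point is reduced to a 1-D measure."""
    best = (float("inf"), 0.0)            # (offset, measure)
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        t = ((px - x1) * dx + (py - y1) * dy) / (seg_len ** 2)
        t = max(0.0, min(1.0, t))          # clamp to the segment
        cx, cy = x1 + t * dx, y1 + t * dy  # closest point on this segment
        offset = math.hypot(px - cx, py - cy)
        if offset < best[0]:
            best = (offset, travelled + t * seg_len)
        travelled += seg_len
    return best[1], best[0]

route = [(0, 0), (100, 0), (100, 50)]
m, off = linear_measure(route, 60.0, 3.0)
print(m, off)  # -> 60.0 3.0: 60 units along the route, 3 units off-line
```

A GPS point near an intersection of two routes can snap to the wrong one, which is the "mapping to the wrong segment" issue the report raises.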
Abstract:
OBJECTIVES: To test the validity of a simple, rapid, field-adapted, portable hand-held impedancemeter (HHI) for the estimation of lean body mass (LBM) and percentage body fat (%BF) in African women, and to develop specific predictive equations. DESIGN: Cross-sectional observational study. SETTINGS: Dakar, the capital city of Senegal, West Africa. SUBJECTS: A total sample of 146 women volunteered. Their mean age was 31.0 y (s.d. 9.1), weight 60.9 kg (s.d. 13.1) and BMI 22.6 kg/m(2) (s.d. 4.5). METHODS: Body composition values estimated by HHI were compared to those measured by whole body densitometry performed by air displacement plethysmography (ADP). The specific density of LBM in black subjects was taken into account for the calculation of %BF from body density. RESULTS: Estimations from HHI showed a large bias (mean difference) of 5.6 kg LBM (P<10(-4)) and -8.8 %BF (P<10(-4)) and errors (s.d. of the bias) of 2.6 kg LBM and 3.7 %BF. In order to correct for the bias, specific predictive equations were developed. With the HHI result as a single predictor, error values were 1.9 kg LBM and 3.7 %BF in the prediction group (n=100), and 2.2 kg LBM and 3.6 %BF in the cross-validation group (n=46). Addition of anthropometrical predictors was not necessary. CONCLUSIONS: The HHI analyser significantly overestimated LBM and underestimated %BF in African women. After correction for the bias, the body compartments could easily be estimated in African women by using the HHI result in an appropriate prediction equation with good precision. It remains to be seen whether a combination of arm and leg impedancemetry, taking the lower limbs into account, would further improve the prediction of body composition in Africans.
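Developing a predictive equation of this kind amounts to regressing the reference measurement on the biased instrument reading and using the fitted line as the correction. The sketch below reproduces the idea on synthetic data whose bias and error magnitudes mimic the abstract's figures; the data themselves are illustrative, not the study's measurements:

```python
import numpy as np

# Sketch: correct a biased instrument (HHI) against a reference method
# (ADP) by fitting a single-predictor linear equation. Data are synthetic.

rng = np.random.default_rng(0)
n = 146
lbm_ref = rng.normal(40.0, 6.0, n)                  # "ADP" reference LBM, kg
lbm_hhi = lbm_ref + 5.6 + rng.normal(0.0, 2.6, n)   # biased "HHI" estimate

# Fit reference = a * HHI + b by ordinary least squares
a, b = np.polyfit(lbm_hhi, lbm_ref, 1)
corrected = a * lbm_hhi + b

bias_before = np.mean(lbm_hhi - lbm_ref)
bias_after = np.mean(corrected - lbm_ref)
print(round(bias_before, 2), round(bias_after, 2))
```

In-sample the fitted equation removes the mean bias by construction; the study's cross-validation group (n=46) is what shows the correction also holds on new subjects.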
Abstract:
Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. METHODS: This study has been conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with (131)I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with (177)Lu-peptides; and case 3, hepatocellular carcinoma treated with (90)Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD), and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd), using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against ρ and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and density, ρ: case 1 (Δ = -0.56ρ + 0.62, R(2) = 0.93), case 2 (Δ = -0.91ρ + 0.96, R(2) = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R(2) = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), but to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for the 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; and case 3, -1.5% to 2%). The linear relationship with density disappeared for cases 2 and 3, whereas it persisted for case 1 (Δ = 0.41ρ - 0.38, R(2) = 0.88), although with a less pronounced slope. CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. A simple density-correction method was proposed and improved the agreement of the absorbed dose calculations when using our voxel S value implementation.
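The density-binned regression analysis described above (bin voxels by density ρ, average the relative difference Δ per bin, fit a straight line) can be sketched on synthetic data. The dose maps, the assumed Δ(ρ) trend, and the bin layout below are all illustrative, not the study's data:

```python
import numpy as np

# Sketch: density-binned relative difference between a DK dose map and a
# Monte Carlo reference, fitted with a linear regression. Data synthetic.

rng = np.random.default_rng(0)
n_vox = 5000
rho = rng.uniform(0.3, 1.5, n_vox)              # voxel densities, g/cm3
d_mc = rng.uniform(0.5, 2.0, n_vox)             # "3D-RD" reference dose
delta_true = -0.9 * rho + 0.95                  # assumed Δ(ρ) trend
d_vd = d_mc * (1.0 + delta_true + rng.normal(0, 0.01, n_vox))

delta = d_vd / d_mc - 1.0                       # voxel relative difference
bins = np.linspace(0.3, 1.5, 13)
idx = np.digitize(rho, bins) - 1
centers = 0.5 * (bins[:-1] + bins[1:])
binned = np.array([delta[idx == i].mean() for i in range(12)])

slope, intercept = np.polyfit(centers, binned, 1)
print(round(slope, 2), round(intercept, 2))
```

Recovering a clean linear Δ(ρ) trend like this is what justifies a simple density-based correction of the uniform-density DK dose.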
Abstract:
Voltage fluctuations caused by parasitic impedances in the power supply rails are a major concern in modern ICs. The voltage fluctuations spread out to the diverse nodes of the internal sections, causing two effects: a degradation of performance, mainly impacting gate delays, and a noisy contamination of the quiescent levels of the logic that drives the node. Both effects are presented together in this paper, showing that both are a cause of errors in modern and future digital circuits. The paper groups both error mechanisms and shows how the global error rate is related to the voltage deviation and the clock period of the digital system.
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data that is analyzed comes from a real paper machine and is collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are independently used to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two together. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, gives a tool to separate significant signals from noise.
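The Ljung-Box Q statistic mentioned above is computed from the sample autocorrelations as Q = n(n+2) Σ_k ρ_k²/(n-k); under the white-noise null it is approximately chi-square with `lags` degrees of freedom. A minimal sketch, with the test signals (white noise vs. a strongly autocorrelated AR(1) series) as illustrative assumptions:

```python
import numpy as np

# Sketch: Ljung-Box Q lack-of-fit statistic from sample autocorrelations,
# used to separate significant signal structure from noise.

def ljung_box_q(x, lags=10):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    denom = np.sum(x * x)
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(x[:-k] * x[k:]) / denom   # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)
white = rng.normal(0, 1, 500)
ar = np.empty(500)                                # AR(1), phi = 0.9
ar[0] = white[0]
for t in range(1, 500):
    ar[t] = 0.9 * ar[t - 1] + white[t]

print(round(ljung_box_q(white), 1), round(ljung_box_q(ar), 1))
```

The white-noise Q stays near its chi-square expectation while the autocorrelated series yields a Q orders of magnitude larger, which is how the test flags residual structure left by a model.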
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant. Achieving the minimum current requires the control of the reference. An on-line method to perform the minimization of the current by controlling the stator flux linkage reference is presented. Also, the control of the reference above the base speed is considered. A new flux linkage estimation method is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
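The integration problem the thesis addresses can be illustrated generically: the voltage-model flux estimate integrates (u - R·i), so even a small DC measurement offset makes a pure integrator drift without bound, while a low-pass filter used as a drift-limited integrator bounds the error. This is a generic sketch of that trade-off, not the thesis's improved method; all machine values and the offset are illustrative assumptions:

```python
import numpy as np

# Sketch: voltage-model stator flux estimation. A pure integrator of
# (u - R*i) drifts under a DC offset; a low-pass filter bounds the drift.
# All parameter values below are illustrative assumptions.

dt = 1e-4                                     # sampling period, s
t = np.arange(0, 1.0, dt)
omega = 2 * np.pi * 50.0                      # electrical frequency, rad/s
R, i_amp, u_amp = 0.05, 10.0, 100.0
u = u_amp * np.cos(omega * t) + 0.1           # 0.1 V DC measurement offset
i = i_amp * np.cos(omega * t - 0.1)

emf = u - R * i
psi_int = np.cumsum(emf) * dt                 # pure integrator: drifts

# Low-pass filter as a drift-limited "integrator": y' = -y/tau + emf
tau = 0.05
a = np.exp(-dt / tau)
psi_lpf = np.empty_like(emf)
acc = 0.0
for k, e in enumerate(emf):
    acc = a * acc + (1 - a) * tau * e         # discrete LPF, DC gain = tau
    psi_lpf[k] = acc

drift_int = abs(np.mean(psi_int[-1000:]))     # residual DC at the end
drift_lpf = abs(np.mean(psi_lpf[-1000:]))
print(drift_int, drift_lpf)
```

The price of the filter is phase and magnitude error near the cut-off frequency, which is why the thesis works on improving the filter's dynamic performance rather than simply lowering its bandwidth.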
Abstract:
The marketplace of the twenty-first century will demand that manufacturing assumes a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990's. In this drive, the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the achievable profit is wasted due to poor quality of planning and workmanship. The production error distribution in the production flow of sheet metal part based constructions is inspected in this thesis. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet metal based constructions. Employee empowerment is also investigated in theory, and the role of employee empowerment in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet metal part based constructions for the electronics and telecommunication industry. This study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each studied case factory the work phases most prone to production errors were identified. It can be assumed that most of the production errors arise in manually operated work phases and in mass production work phases. However, no common theme could be found in the collected data on the distribution of production errors in the production flow. The most important finding was that most of the production errors in each case factory studied belong to the 'human activity based errors' category. This result indicates that most of the problems in the production flow are related to employees or work organization. Development activities must therefore be focused on the development of employee skills or of the work organization. Employee empowerment gives the right tools and methods to achieve this.