989 results for correction methods
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments.
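The Laplace approximation used to estimate posterior rank probabilities can be illustrated on a toy model rather than the paper's error correction model. A minimal sketch, assuming the simple conjugate setup y_i ~ N(theta, 1) with prior theta ~ N(0, 1), where the approximation is exact and can be checked against the closed-form marginal likelihood:

```python
import numpy as np

def log_marginal_laplace(y):
    """Laplace approximation to the log marginal likelihood of the toy
    model y_i ~ N(theta, 1) with prior theta ~ N(0, 1):
    log m(y) ~= L(theta_hat) + 0.5*log(2*pi) - 0.5*log(-L''(theta_hat)),
    where L(theta) = log p(y|theta) + log p(theta) and theta_hat is the
    posterior mode."""
    n = len(y)
    theta_hat = np.sum(y) / (n + 1)  # posterior mode (MAP) of this model
    log_lik = -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - theta_hat) ** 2)
    log_prior = -0.5 * np.log(2 * np.pi) - 0.5 * theta_hat ** 2
    neg_hessian = n + 1              # -L''(theta), constant in theta here
    return log_lik + log_prior + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(neg_hessian)

def log_marginal_exact(y):
    """Exact log marginal: y ~ N(0, I + 11'), so |Sigma| = n + 1 and
    y' Sigma^{-1} y = sum(y^2) - (sum(y))^2 / (n + 1)."""
    n = len(y)
    quad = np.sum(y ** 2) - np.sum(y) ** 2 / (n + 1)
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1) - 0.5 * quad

y = np.array([0.3, -1.2, 0.8, 1.5, -0.4])
print(log_marginal_laplace(y), log_marginal_exact(y))  # agree: log-posterior is quadratic
```

Because the log-posterior is exactly quadratic here, the two values coincide; in a cointegration model the approximation is only asymptotic.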
Abstract:
In this work, a new approach for designing planar gradient coils for use in an existing MRI apparatus is outlined. A technique that allows for gradient field corrections inside the diameter-sensitive volume is presented. These corrections are brought about by making changes to the wire paths that constitute the coil windings, and the technique is hence called the path correction method. The well-known target field method is used to gauge the performance of a typical gradient coil. The design methodology is demonstrated for planar openable gradient coils that can be inserted into an existing MRI apparatus. The path-corrected gradient coil is compared with the coil obtained using the target field method. It is shown that, using wire path correction with optimized variables, winding patterns can be obtained that deliver high magnetic gradient field strengths and large imaging regions.
Abstract:
Background: Treatment of excessive gingival display usually involves procedures such as Le Fort impaction or maxillary gingivectomy. The authors propose an alternative technique that reduces the function of the upper lip elevator muscles and repositions the upper lip. Methods: Fourteen female patients with excessive gingival exposure were operated on between February of 2008 and March of 2009. They were filmed before and at least 6 months after the procedure. They were asked to perform their fullest smile, and the maximum gingival exposures were measured and analyzed using ImageJ software. Patients were operated on under local anesthesia. The gingival mucosa was freed from the maxilla using a periosteum elevator. Skin and subcutaneous tissue were dissected bluntly from the underlying musculature of the upper lip. A frenuloplasty was performed to lengthen the upper lip. Both levator labii superioris muscles were dissected and divided. Results: The postoperative course was uneventful in all patients. The mean gingival exposure before surgery was 5.22 +/- 1.48 mm; 6 months after surgery, it was 1.91 +/- 1.50 mm. The mean reduction in gingival exposure was 3.31 +/- 1.05 mm (p < 0.001), ranging from 1.59 to 4.83 mm. Conclusion: This study shows that the proposed technique was efficient in reducing the amount of gum exposed during smiling in all patients in this series. (Plast. Reconstr. Surg. 126: 1014, 2010.)
Abstract:
BACKGROUND AND PURPOSE: Functional brain variability has scarcely been investigated in cognitively healthy elderly subjects, and it is currently debated whether previous findings of regional metabolic variability are artifacts associated with brain atrophy. The primary purpose of this study was to test whether there is regional cerebral age-related hypometabolism specifically in later stages of life. MATERIALS AND METHODS: MR imaging and FDG-PET data were acquired from 55 cognitively healthy elderly subjects, and voxel-based linear correlations between age and GM volume or regional cerebral metabolism were conducted by using SPM5 in images with and without correction for PVE. To investigate sex-specific differences in the pattern of brain aging, we repeated the above voxelwise calculations after dividing our sample by sex. RESULTS: Our analysis revealed 2 large clusters of age-related metabolic decrease in the overall sample, 1 in the left orbitofrontal cortex and the other in the right temporolimbic region, encompassing the hippocampus, the parahippocampal gyrus, and the amygdala. The division of our sample by sex revealed significant sex-specific age-related metabolic decrease in the left temporolimbic region of men and in the left dorsolateral frontal cortex of women. When we applied atrophy correction to our PET data, none of the above-mentioned correlations remained significant. CONCLUSIONS: Our findings suggest that age-related functional brain variability in cognitively healthy elderly individuals is largely secondary to the degree of regional brain atrophy, and they lend support to the notion that appropriate PVE correction is a key tool in neuroimaging investigations.
Abstract:
Pectus carinatum (PC) is a chest deformity caused by a disproportionate growth of the costal cartilages compared to the bony thoracic skeleton, pushing the sternum outwards, which leads to its protrusion. There has been growing interest in using the 'reversed Nuss' technique as a minimally invasive procedure for PC surgical correction. A corrective bar is introduced between the skin and the thoracic cage and positioned on top of the highest protrusion area of the sternum to apply continuous pressure. It is then fixed to the ribs and kept implanted for about 2-3 years. The purpose of this work was to (a) assess the stress distribution on the thoracic cage that arises from the procedure, and (b) investigate the impact of different positions of the corrective bar along the sternum. The highest stresses were generated on the posterior ends of the 4th, 5th and 6th ribs, supporting the hypothesis of correction-induced scoliosis in pectus deformities. Different bar positions originated different stresses on the posterior ends of the ribs. The bar position that led to the lowest stresses on the posterior ends of the ribs was also the one that led to the smallest sternum displacement. Even so, this position may be preferred, as it lowers the risk of induced scoliosis.
Abstract:
Aiming to compare three different methods for the determination of organic carbon (OC) in soil and in fractions of humic substances, seventeen Brazilian soil samples of different classes and textures were evaluated. Amounts of OC in the soil samples and the humic fractions were measured by the dichromate-oxidation method, with and without external heating in a digestion block at 130 °C for 30 min; by the loss-on-ignition method at 450 °C for 5 h and at 600 °C for 6 h; and by the dry combustion method. Dry combustion was used as the reference in order to measure the efficiency of the other methods. Soil OC measured by the dichromate-oxidation method with external heating had the highest efficiency and the best agreement with the reference method. When external heating was not used, the mean recovery efficiency dropped to 71%. The loss-on-ignition methods overestimated the amount of OC. Regression equations between total OC contents of the reference method and those of the other methods showed relatively good fit, but all intercepts were different from zero (p < 0.01), which suggests that better accuracy can be obtained by using not a single correction factor but also considering the intercept. The Walkley-Black method underestimated the OC contents of the humic fractions, which was associated with partial oxidation of the humin fraction. Better results were obtained when external heating was used. For the organic matter fractions, the OC in the humic and fulvic acid fractions can be determined without external heating if the reference method is not available, but the humin fraction requires external heating.
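The point about nonzero intercepts can be made concrete. A minimal sketch, assuming hypothetical regression coefficients (a and b below are illustrative, not the paper's fitted values), contrasting the full regression correction with a single multiplicative factor:

```python
# Hypothetical coefficients for illustration only (not the paper's values):
# suppose regressing reference (dry combustion) OC on dichromate OC gave
# OC_ref = a + b * OC_meas, in g/kg.
a, b = 2.0, 1.10

def correct_with_intercept(oc_meas):
    """Full regression correction: uses both slope and intercept."""
    return a + b * oc_meas

def correct_single_factor(oc_meas, factor=1.2):
    """Single multiplicative factor (e.g. the classic 1.2-1.3 range used
    to compensate for incomplete Walkley-Black recovery)."""
    return factor * oc_meas

for oc in (5.0, 20.0, 40.0):  # measured OC, g/kg
    print(oc, correct_with_intercept(oc), correct_single_factor(oc))
# A single factor scales all samples proportionally, while a nonzero
# intercept changes low-OC samples relatively more - which is why the
# abstract argues the intercept should not be ignored.
```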
Abstract:
In areas cultivated under a no-tillage system, the availability of phosphorus (P) can be raised by means of gradual corrective fertilization, applying phosphorus into sowing furrows at doses higher than those required by the crops. The objective of this work was to establish the amount of P to be applied to a soybean crop to increase the P content to pre-established values at a depth of 0.0 to 0.10 m. An experiment was carried out on a clayey Haplorthox soil with a randomized block design in a split-split plot arrangement with four replications. Two soybean crop systems (single or intercropped with Panicum maximum Jaca cv. Aruana) were evaluated in the plots. Four P levels (0, 60, 120 and 180 kg ha-1 P2O5) applied in the first year were evaluated in the split plots, and four P levels (0, 30, 60 and 90 kg ha-1 P2O5) applied in the two subsequent crops were evaluated in the split-split plots. P contents were extracted by the Mehlich-1 and anion exchange resin methods from soil samples collected in the split-split plots. It was found that 19.4 or 11.1 kg ha-1 of P2O5, using triple superphosphate as the source, must be applied to increase P extracted by Mehlich-1 or resin, respectively, by 1 mg dm-3 in the 0.0 to 0.10 m layer. The P-sink character of the soil decreases as the amount of this nutrient supplied to previous crops increases.
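Using the rates reported in the abstract, the P2O5 dose needed for a target increase in extractable P is a direct multiplication; a small arithmetic sketch:

```python
# kg ha-1 of P2O5 (as triple superphosphate) needed to raise extractable P
# by 1 mg dm-3 in the 0.0-0.10 m layer, as reported in the abstract.
RATE = {"mehlich1": 19.4, "resin": 11.1}

def p2o5_dose(delta_p_mg_dm3, method="mehlich1"):
    """Dose of P2O5 (kg ha-1) to raise extractable P by delta_p mg dm-3."""
    return RATE[method] * delta_p_mg_dm3

# e.g. to raise extractable P by 5 mg dm-3:
print(p2o5_dose(5, "mehlich1"))  # Mehlich-1 basis
print(p2o5_dose(5, "resin"))     # resin basis
```

The resin-based dose is lower because the resin extractant recovers a larger share of the applied P.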
Abstract:
A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during the surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, where n is the number of vertebrae involved. As such, we are faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. The fitness evaluation requires one computationally intensive Finite Element Analysis per candidate solution; thus, a parallel implementation of the Genetic Algorithm is developed.
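A genetic algorithm over permutations of this kind can be sketched generically. The sketch below assumes a cheap stand-in fitness function in place of the paper's finite element analysis (the toy fitness and all parameter values are illustrative only); the permutation operators (order crossover, swap mutation) are the standard ones for TSP-like encodings:

```python
import random

random.seed(42)  # reproducible toy run

def fitness(perm):
    """Stand-in for the expensive FE analysis: penalise large jumps between
    successively tightened vertebrae. The real objective is the magnitude
    of the corrective forces computed by one FE analysis per candidate."""
    return sum(abs(a - b) for a, b in zip(perm, perm[1:]))

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the rest in p2's order,
    so the child is always a valid permutation."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(perm, rate=0.2):
    """Swap mutation: exchange two positions with probability `rate`."""
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]

def ga(n_vertebrae=10, pop_size=40, generations=200):
    pop = [random.sample(range(1, n_vertebrae + 1), n_vertebrae)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # minimise corrective forces
        survivors = pop[:pop_size // 2]       # truncation selection
        children = [order_crossover(random.choice(survivors),
                                    random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        for c in children:
            mutate(c)
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

In the paper, each fitness call is one FE analysis, which is why the population is evaluated in parallel.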
Abstract:
Introduction: Myocardial Perfusion Imaging (MPI) is a very important tool in the assessment of Coronary Artery Disease (CAD) patients, and worldwide data demonstrate increasingly wide use and clinical acceptance. Nevertheless, it is a complex process and quite vulnerable to a number of possible artefacts, some of which seriously affect the overall quality and clinical utility of the obtained data. One of the most inconvenient artefacts, and a relatively frequent one (20% of cases), is related to patient motion during image acquisition. Mostly, in those situations, the data are evaluated and a decision is made either (A) to accept the results as they are, considering that the "noise" so introduced does not affect the final clinical information too seriously, or (B) to repeat the acquisition. Another possibility is to use the motion correction software provided within the software package included in any current gamma camera. The aim of this study is to compare the quality of the final images obtained after the application of motion correction software with those obtained after repetition of the acquisition. Material and Methods: Thirty cases of MPI affected by motion artefacts, and subsequently repeated, were used. A group of three independent expert Nuclear Medicine clinicians, blinded to the origin of the images, were invited to evaluate the 30 sets of three images (one set per patient): (A) the original image, motion uncorrected; (B) the original image, motion corrected; and (C) the second acquisition, without motion. The results were statistically analysed.
Results and Conclusion: The results demonstrate that motion correction software is useful essentially when the amplitude of movement is not too large (a threshold that proved hard to define precisely, due to discrepancies between clinicians and other factors, namely differences between camera brands). When that is not the case and the amplitude of movement is too large, the percentage of agreement between clinicians is much higher, and repetition of the examination is unanimously considered indispensable.
Abstract:
Introduction: The quantification of differential renal function in adults can be difficult due to many factors, one of which is the variation in kidney depth and the attenuation caused by the tissues between the kidney and the camera. Some authors state that the lower attenuation in pediatric patients makes attenuation correction algorithms unnecessary. This study compares the values of differential renal function obtained with and without attenuation correction techniques. Material and Methods: Images from a group of 15 individuals (aged 3 +/- 2 years) were used, and two attenuation correction methods were applied: Tonnesen correction factors and the geometric mean method. The mean acquisition time (post 99mTc-DMSA administration) was 3.5 +/- 0.8 hours. Results: The absence of attenuation correction appears to lead to consistent values that correlate well with those obtained when attenuation correction methods are incorporated. The differences between the values obtained with and without attenuation correction were not significant. Conclusion: The decision not to apply any attenuation correction method can apparently be justified by the minor differences verified in the relative kidney uptake values. Nevertheless, if a truly accurate value of relative kidney uptake is required, an attenuation correction method should be used.
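The geometric mean method mentioned above combines anterior and posterior counts for each kidney, which cancels the depth dependence of attenuation to first order. A minimal sketch (the count values are illustrative, not patient data):

```python
import math

def geometric_mean(ant_counts, post_counts):
    """Depth-independent counts estimate: GM = sqrt(anterior * posterior)."""
    return math.sqrt(ant_counts * post_counts)

def differential_function(left_ant, left_post, right_ant, right_post):
    """Split renal function (%) from the geometric means of the two kidneys."""
    gm_left = geometric_mean(left_ant, left_post)
    gm_right = geometric_mean(right_ant, right_post)
    total = gm_left + gm_right
    return 100 * gm_left / total, 100 * gm_right / total

# Illustrative background-subtracted counts (not patient data):
left, right = differential_function(10_000, 40_000, 10_000, 10_000)
print(round(left, 1), round(right, 1))  # 66.7 33.3
```

With posterior-only counts and no correction, the split would instead be computed directly from the posterior counts, which is the uncorrected approach the study evaluates.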
Abstract:
Microarrays allow thousands of genes to be monitored simultaneously, quantifying the abundance of transcripts under the same experimental condition at the same time. Among the various available array technologies, two-channel cDNA microarray experiments have arisen in numerous technical protocols associated with genomic studies, and they are the focus of this work. Microarray experiments involve many steps, each of which can affect the quality of the raw data. Background correction and normalization are preprocessing techniques used to clean and correct the raw data when undesirable fluctuations arise from technical factors. Several recent studies have shown that no preprocessing strategy outperforms the others in all circumstances, so it seems difficult to provide general recommendations. In this work, it is proposed to use exploratory techniques to visualize the effects of preprocessing methods on the statistical analysis of two-channel cancer microarray data sets in which the cancer types (classes) are known. The arrow plot was used to select differentially expressed genes, and the graph of profiles resulting from correspondence analysis was used to visualize the results. Six background correction methods and six normalization methods were used, yielding 36 preprocessing combinations, which were analyzed on a published cDNA microarray database (Liver), available at http://genome-www5.stanford.edu/, whose microarrays were already classified by cancer type. All statistical analyses were performed using the R statistical software.
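The 6 x 6 grid of preprocessing pipelines can be enumerated mechanically; a sketch with placeholder method names, since the abstract does not list which six of each kind were used:

```python
from itertools import product

# Placeholder names - the abstract does not state which methods were used.
background_methods = ["none", "subtract", "half", "minimum", "movingmin", "normexp"]
normalization_methods = ["none", "median", "loess", "printtiploess", "quantile", "scale"]

# Every background method paired with every normalization method:
pipelines = list(product(background_methods, normalization_methods))
print(len(pipelines))  # 36 preprocessing combinations
for bg, norm in pipelines[:3]:
    print(f"background={bg}, normalization={norm}")
```

Each pipeline would be applied to the raw two-channel intensities before the downstream gene selection and correspondence analysis.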
Abstract:
Work presented within the scope of the European Master in Computational Logics as a partial requirement for the degree of Master in Computational Logics
Abstract:
Dissertation to Obtain the Degree of Master in Biomedical Engineering
Abstract:
Objective: To compare measurements of the upper arm cross-sectional areas (total arm area, arm muscle area, and arm fat area) of healthy neonates as calculated using anthropometry with the values obtained by ultrasonography. Materials and methods: This study was performed on 60 consecutively born healthy neonates: gestational age (mean ± SD) 39.6 ± 1.2 weeks, birth weight 3287.1 ± 307.7 g, 27 males (45%) and 33 females (55%). Mid-arm circumference and tricipital skinfold thickness measurements were taken on the left upper mid-arm according to the conventional anthropometric method to calculate total arm area, arm muscle area and arm fat area. The ultrasound evaluation was performed at the same arm location using a Toshiba Sonolayer SSA-250A®, which allows the calculation of the total arm area, arm muscle area and arm fat area from the number of pixels enclosed in the plotted areas. Statistical analysis: whenever appropriate, parametric and non-parametric tests were used in order to compare measurements of paired samples and of groups of samples. Results: No significant differences between males and females were found in any of the evaluated measurements, estimated either by anthropometry or by ultrasound. The median total arm area also did not differ significantly between the two methods (P = 0.337). Although there is evidence of concordance of the total arm area measurements (r = 0.68, 95% CI: 0.55-0.77), the two methods of measurement differed for arm muscle area and arm fat area. The estimated median ultrasound measurements of arm muscle area were significantly lower than those estimated by the anthropometric method, which differed by as much as 111% (P < 0.001). The estimated median ultrasound measurement of arm fat area was higher than the anthropometric arm fat area by as much as 31% (P < 0.001).
Conclusion: Compared with ultrasound measurements, using skinfold measurements and mid-arm circumference without further correction may lead to overestimation of the cross-sectional muscle area and underestimation of the cross-sectional fat area. The correlation between the two methods could be interpreted as an indication for a further search for correction factors in the equations.
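The conventional anthropometric equations behind this comparison model the arm as a circle and the muscle compartment as a concentric circle inside the fat ring; a sketch using the standard Frisancho-type formulas (the input values below are illustrative, not the study's data):

```python
import math

def arm_areas(mac_cm, tsf_cm):
    """Upper-arm cross-sectional areas (cm^2) from mid-arm circumference
    (MAC) and tricipital skinfold thickness (TSF), using the standard
    anthropometric equations:
        total  = MAC^2 / (4*pi)
        muscle = (MAC - pi*TSF)^2 / (4*pi)
        fat    = total - muscle
    """
    total = mac_cm ** 2 / (4 * math.pi)
    muscle = (mac_cm - math.pi * tsf_cm) ** 2 / (4 * math.pi)
    return total, muscle, total - muscle

# Illustrative neonatal values (not the study's data): MAC 10 cm, TSF 0.5 cm
total, muscle, fat = arm_areas(10.0, 0.5)
print(round(total, 2), round(muscle, 2), round(fat, 2))
```

These equations assume circularity and a uniform fat ring, which is precisely the simplification the study suggests may need correction factors.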
Abstract:
OBJECTIVE: To evaluate whether left ventricular end-systolic diameters (ESD) ≥ 51 mm in patients (pt) with severe chronic mitral regurgitation (MR) are predictors of a poor prognosis after mitral valve surgery (MVS). METHODS: Eleven pt (aged 36±13 years) were studied in the preoperative period (pre), a median of 36 days; in the early postoperative period (post1), a median of 9 days; and in the late postoperative period (post2), a mean of 38.5±37.6 months. Clinical and echocardiographic data were gathered from each pt with MR and a systolic diameter ≥ 51 mm (mean = 57±4 mm) to evaluate the result of MVS. Ten patients were in NYHA class III/IV. RESULTS: All but 2 pt improved in functional class. Two pt died from heart failure and infectious endocarditis 14 and 11 months, respectively, after valve replacement. According to the ejection fraction (EF) in post2, we identified 2 groups: group 1 (n=6), whose EF decreased in post1 but increased in post2 (p=0.01), and group 2 (n=5), whose EF decreased progressively from post1 to post2 (p=0.10). All pt with symptoms lasting ≤ 48 months had improvement in EF in post2 (p=0.01). CONCLUSION: ESD ≥ 51 mm is not always associated with a poor prognosis after MVS in patients with MR. Symptoms lasting up to 48 months are associated with improvement in left ventricular function.