973 results for direct-subtracting method
Abstract:
Despite the considerable environmental importance of mercury (Hg), given its high toxicity and ability to contaminate large areas via atmospheric deposition, little is known about its activity in soils, especially tropical soils, compared with other heavy metals. This lack of information arises because analytical methods for the determination of Hg are more laborious and expensive than those for other heavy metals. The situation is even more precarious regarding the speciation of Hg in soils, since sequential extraction methods are also inefficient for this metal. The aim of this paper is to present thermal desorption associated with atomic absorption spectrometry, TDAAS, as an efficient tool for the quantitative determination of Hg in soils. The method consists of the release of Hg by heating, followed by its quantification by atomic absorption spectrometry. It was developed by constructing calibration curves in different soil samples based on increasing volumes of standard Hg2+ solutions. Performance, accuracy, precision, and limits of quantification and detection were evaluated. No matrix interference was detected. Certified reference samples and comparison with a Direct Mercury Analyzer, DMA (another well-established technique), were used to validate the method, which proved to be accurate and precise.
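A minimal sketch of the kind of matrix-matched calibration and limit estimation described above, assuming a simple linear response and the common 3σ/10σ criteria; the spike amounts and instrument responses are hypothetical placeholders, not data from the study.

```python
# Hedged sketch: fitting a matrix-matched Hg calibration curve and estimating
# detection/quantification limits. Spike amounts and responses are hypothetical.
import numpy as np

hg_added_ng = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # Hg2+ spiked into the soil aliquot (hypothetical)
peak_area   = np.array([0.8, 6.1, 11.7, 23.4, 46.2])   # TDAAS absorbance peak areas (hypothetical)

# Ordinary least-squares fit of response versus added Hg
slope, intercept = np.polyfit(hg_added_ng, peak_area, 1)
residuals = peak_area - (slope * hg_added_ng + intercept)
s_resid = residuals.std(ddof=2)                          # standard error of the fit

# One common convention: LOD = 3*sigma/slope, LOQ = 10*sigma/slope
lod = 3.0 * s_resid / slope
loq = 10.0 * s_resid / slope
print(f"slope={slope:.3f}, LOD={lod:.2f} ng, LOQ={loq:.2f} ng")
```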
Abstract:
To study the stress-induced effects caused by wounding from a new perspective, a metabolomic strategy based on HPLC-MS was devised for the model plant Arabidopsis thaliana. To detect induced metabolites and precisely localise these compounds among the numerous constitutive metabolites, HPLC-MS analyses were performed in a two-step strategy. In the first step, rapid direct TOF-MS measurements of the crude leaf extract were performed with a ballistic gradient on a short LC column. The HPLC-MS data were investigated by multivariate analysis as total mass spectra (TMS). Principal component analysis (PCA) and hierarchical cluster analysis (HCA) on principal coordinates were combined for data treatment. PCA and HCA demonstrated a clear clustering of plant specimens when the most discriminating ions given by the complete data analysis were selected, leading to the specific detection of discrete induced ions (m/z values). Furthermore, pools of plants with homogeneous behaviour were constituted for confirmatory analysis. In this second step, long high-resolution LC profiling on a UPLC-TOF-MS system was performed on the pooled samples. This allowed the putative biological markers induced by wounding to be precisely localised by specific extraction of the accurate m/z values detected in the screening procedure on the TMS spectra.
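A minimal sketch of the screening step as described: PCA on a total-mass-spectra matrix, hierarchical clustering on the principal coordinates, and ranking of m/z bins by loading to nominate discriminating ions. The data matrix, m/z axis and Ward linkage are placeholder assumptions.

```python
# Hedged sketch: PCA + HCA screening of total mass spectra (TMS). Random data
# stand in for the plant extracts; all dimensions are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 500))        # 24 plant extracts x 500 binned m/z intensities (placeholder)
mz_bins = np.linspace(100, 600, 500)  # hypothetical m/z axis

pca = PCA(n_components=5)
scores = pca.fit_transform(X)         # principal coordinates of the specimens

# Hierarchical cluster analysis on the PCA scores (Ward linkage as one choice)
Z = linkage(scores, method="ward")
clusters = fcluster(Z, t=2, criterion="maxclust")   # e.g. wounded vs. control groups
print("cluster assignment:", clusters)

# Rank ions by absolute loading on the first component as candidate discriminating m/z
loadings = pca.components_[0]
top_ions = mz_bins[np.argsort(np.abs(loadings))[::-1][:10]]
print("candidate discriminating m/z:", np.round(top_ions, 1))
```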
Abstract:
PURPOSE: To compare different techniques for positive contrast imaging of susceptibility markers with MRI for three-dimensional visualization. As several different techniques have been reported, the choice of a suitable method depends on its properties with regard to the amount of positive contrast and the desired background suppression, as well as other imaging constraints needed for a specific application. MATERIALS AND METHODS: Six different positive contrast techniques were investigated for their ability to image a single susceptibility marker in vitro at 3 Tesla. The white marker method (WM), susceptibility gradient mapping (SGM), inversion recovery with on-resonant water suppression (IRON), frequency selective excitation (FSX), fast low flip-angle positive contrast SSFP (FLAPS), and iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) were implemented and investigated. RESULTS: The different methods were compared with respect to the volume of positive contrast, the product of volume and signal intensity, imaging time, and the level of background suppression. Quantitative results are provided, and the strengths and weaknesses of the different approaches are discussed. CONCLUSION: The appropriate choice of positive contrast imaging technique depends on the desired level of background suppression, acquisition speed, and robustness against artifacts, for which in vitro comparative data are now available.
Abstract:
Images of myocardial strain can be used to diagnose heart disease, to plan and monitor treatment, and to learn about cardiac structure and function. Three-dimensional (3D) strain is typically quantified using many magnetic resonance (MR) images obtained in two or three orthogonal planes. Problems with this approach include long scan times, image misregistration, and through-plane motion. This article presents a novel method for calculating cardiac 3D strain using a stack of two or more images acquired in only one orientation. The zHARP pulse sequence encodes in-plane motion using MR tagging and out-of-plane motion using phase encoding, and has previously been shown to be capable of computing 3D displacement within a single image plane. Here, data from two adjacent image planes are combined to yield a 3D strain tensor at each pixel; stacks of zHARP images can be used to derive stacked arrays of 3D strain tensors without imaging multiple orientations and without numerical interpolation. The performance and accuracy of the method are demonstrated in vitro on a phantom and in vivo in four healthy adult human subjects.
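A minimal sketch of one standard way a per-pixel 3D strain tensor can be formed once displacements (of the kind zHARP provides) are available: a Green-Lagrange strain from the displacement gradient. The displacement field, grid size and spacings below are synthetic assumptions, not the article's processing chain.

```python
# Hedged sketch: Green-Lagrange strain from a 3D displacement field sampled on
# two adjacent image planes. All data here are synthetic.
import numpy as np

nx, ny, nz = 32, 32, 2                               # two adjacent planes
u = np.zeros((nx, ny, nz, 3))                        # u[..., k] = displacement component k
u[..., 0] = 0.01 * np.arange(nx)[:, None, None]      # synthetic stretch along x

dx = dy = 1.0          # in-plane pixel spacing (assumed units)
dz = 8.0               # slice separation between the two planes (assumed)

# Displacement gradient tensor G[i, j] = d u_i / d x_j at every voxel
grads = [np.gradient(u[..., i], dx, dy, dz) for i in range(3)]
G = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)   # shape (nx, ny, nz, 3, 3)

# Green-Lagrange strain: E = 0.5 * (G + G^T + G^T G)
GT = np.swapaxes(G, -1, -2)
E = 0.5 * (G + GT + GT @ G)
print("strain tensor at one pixel:\n", E[nx // 2, ny // 2, 0])
```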
Abstract:
The aim of this work is to present a new concept, called on-line desorption of dried blood spots (on-line DBS), allowing the direct analysis of a dried blood spot coupled to a liquid chromatography-mass spectrometry (LC/MS) device. The system is based on a stainless-steel cell which receives a blood sample (10 microL) previously spotted on filter paper. The cell is then integrated into the LC/MS system, where the analytes are desorbed from the paper towards a column-switching system ensuring the purification and separation of the compounds before their detection on a single quadrupole MS coupled to an atmospheric pressure chemical ionisation (APCI) source. The described procedure requires no pretreatment, even though the analysis is based on a whole-blood sample. To demonstrate the applicability of the concept, saquinavir, imipramine, and verapamil were chosen. Despite the use of a small sampling volume and a single quadrupole detector, on-line DBS allowed the analysis of these three compounds over their therapeutic concentration ranges, from 50 to 500 ng/mL for imipramine and verapamil and from 100 to 1000 ng/mL for saquinavir. Moreover, the method showed good repeatability, with relative standard deviations (RSD) lower than 15% at two concentration levels (low and high). Response functions were found to be linear over the therapeutic range for each compound and were used to determine the concentrations in real patient samples for saquinavir. Comparison of the values found with those of a validated method used routinely in a reference laboratory showed good correlation between the two methods. Moreover, good selectivity was observed, ensuring that no endogenous or chemical components interfered with the quantitation of the analytes. This work demonstrates the feasibility and applicability of the on-line DBS procedure for bioanalysis.
Abstract:
BACKGROUND: The activity of the renin-angiotensin system is usually evaluated as plasma renin activity (PRA, ngAI/ml per h), but the reproducibility of this enzymatic assay is notoriously poor. We compared the inter- and intralaboratory reproducibility of PRA with that of a new automated chemiluminescent assay, which allows the direct quantification of immunoreactive renin [chemiluminescent immunoreactive renin (CLIR), microU/ml]. METHODS: Aliquots from six pooled plasmas of patients with very low to very high PRA levels were measured in 12 centres with both the enzymatic and the direct assays. The same methods were applied to three control plasma preparations with known renin content. RESULTS: In pooled plasmas, mean PRA values ranged from 0.14 +/- 0.08 to 18.9 +/- 4.1 ngAI/ml per h, whereas those of CLIR ranged from 4.2 +/- 1.7 to 436 +/- 47 microU/ml. In control plasmas, mean values of PRA and of CLIR were always within the expected range. Overall, there was a significant correlation between the two methods (r = 0.73, P < 0.01). Similar correlations were found in plasmas subdivided into those with low, intermediate and high PRA. However, the coefficients of variation among laboratories were always higher for PRA than for CLIR, ranging from 59.4 to 17.1% for PRA and from 41.0 to 10.7% for CLIR (P < 0.01). The mean intralaboratory variability was also higher for PRA than for CLIR (8.5 versus 4.5%, P < 0.01). CONCLUSION: The measurement of renin with the chemiluminescent method is a reliable alternative to PRA, with the advantage of superior inter- and intralaboratory reproducibility.
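A minimal sketch of the two reproducibility metrics compared above: the inter-laboratory coefficient of variation per pooled plasma and the correlation between the two assays. All values are random placeholders, not the study's measurements.

```python
# Hedged sketch: inter-laboratory CV per pool and PRA-vs-CLIR correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_labs, n_pools = 12, 6
pra  = np.abs(rng.normal(loc=[0.2, 1, 3, 6, 12, 19], scale=0.3, size=(n_labs, n_pools)))
clir = np.abs(rng.normal(loc=[4, 25, 70, 150, 300, 430], scale=8.0, size=(n_labs, n_pools)))

# Inter-laboratory CV (%) for each pool: SD across labs / mean across labs
cv_pra  = 100 * pra.std(axis=0, ddof=1)  / pra.mean(axis=0)
cv_clir = 100 * clir.std(axis=0, ddof=1) / clir.mean(axis=0)

# Correlation between the two methods over all lab/pool measurements
r, p = stats.pearsonr(pra.ravel(), clir.ravel())
print("CV% PRA :", np.round(cv_pra, 1))
print("CV% CLIR:", np.round(cv_clir, 1))
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```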
Abstract:
Ethyl glucuronide (EtG) is a minor and direct metabolite of ethanol. EtG is incorporated into the growing hair, allowing retrospective investigation of chronic alcohol abuse. In this study, we report the development and validation of a method using gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS) for the quantification of EtG in hair. EtG was extracted from about 30 mg of hair by aqueous incubation and purified by solid-phase extraction (SPE) using mixed-mode extraction cartridges, followed by derivatization with perfluoropentanoic anhydride (PFPA). The analysis was performed in the selected reaction monitoring (SRM) mode using the transitions m/z 347-->163 (for quantification) and m/z 347-->119 (for identification) for EtG, and m/z 352-->163 for EtG-d5, used as internal standard. For validation, we prepared quality controls (QC) using hair samples taken post mortem from two subjects with a known history of alcoholism. These samples were confirmed by a proficiency test with seven participating laboratories. The assay linearity for EtG was confirmed over the range from 8.4 to 259.4 pg/mg hair, with a coefficient of determination (r(2)) above 0.999. The limit of detection (LOD) was estimated at 3.0 pg/mg. The lower limit of quantification (LLOQ) of the method was fixed at 8.4 pg/mg. Repeatability and intermediate precision (relative standard deviation, RSD%), tested at four QC levels, were less than 13.2%. The analytical method was applied to several hair samples obtained from autopsy cases with a history of alcoholism and/or lesions caused by alcohol. EtG concentrations in hair ranged from 60 to 820 pg/mg hair.
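A minimal sketch of internal-standard quantification of the kind used in SRM assays with a deuterated standard: the EtG/EtG-d5 peak-area ratio is calibrated against spiked concentrations and used to back-calculate unknowns. All peak areas are illustrative placeholders.

```python
# Hedged sketch: internal-standard calibration and back-calculation for hair EtG.
import numpy as np

cal_conc_pg_mg = np.array([8.4, 25, 60, 130, 259.4])        # calibrators (pg EtG per mg hair)
area_etg       = np.array([410, 1250, 2980, 6500, 12900])   # m/z 347->163 peak areas (placeholder)
area_istd      = np.array([5000, 5100, 4950, 5050, 5000])   # m/z 352->163 EtG-d5 areas (placeholder)

ratio = area_etg / area_istd
slope, intercept = np.polyfit(cal_conc_pg_mg, ratio, 1)

def quantify(sample_area_etg, sample_area_istd):
    """Back-calculate the hair EtG concentration from a sample's area ratio."""
    r = sample_area_etg / sample_area_istd
    return (r - intercept) / slope

print(f"estimated concentration: {quantify(3100, 4980):.0f} pg/mg")
```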
Abstract:
The flow of two immiscible fluids through a porous medium depends on the complex interplay between gravity, capillarity, and viscous forces. The interaction between these forces and the geometry of the medium gives rise to a variety of complex flow regimes that are difficult to describe using continuum models. Although a number of pore-scale models have been employed, a careful investigation of the macroscopic effects of pore-scale processes requires methods based on conservation principles in order to reduce the number of modeling assumptions. In this work we perform direct numerical simulations of drainage by solving the Navier-Stokes equations in the pore space and employing the Volume of Fluid (VOF) method to track the evolution of the fluid-fluid interface. After demonstrating that the method is able to handle large viscosity contrasts and model the transition from stable flow to viscous fingering, we focus on the macroscopic capillary pressure and compare different definitions of this quantity under quasi-static and dynamic conditions. We show that the difference between the intrinsic phase-averaged pressures, which is commonly used as the definition of Darcy-scale capillary pressure, is subject to several limitations and is not accurate in the presence of viscous effects or trapping. In contrast, a definition based on the variation of the total surface energy provides an accurate estimate of the macroscopic capillary pressure. This definition, which links the capillary pressure to its physical origin, allows a better separation of viscous effects and does not depend on the presence of trapped fluid clusters.
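The two definitions contrasted above are often written as follows; the notation (phase-averaged pressures, porosity φ, total volume V, interfacial tensions σ and areas A) is an assumption of this sketch, not taken verbatim from the paper.

```latex
% (1) difference of intrinsic phase-averaged pressures;
% (2) energy-based definition from the variation of total interfacial energy
%     with non-wetting saturation, normalised by the pore volume.
\begin{align}
  P_c^{\,\Delta p} &= \langle p_{nw}\rangle^{nw} - \langle p_{w}\rangle^{w}, \\
  P_c^{\,E}        &\approx \frac{1}{\phi V}\,\frac{\mathrm{d}E_s}{\mathrm{d}S_{nw}},
  \qquad E_s = \sum_{ij}\sigma_{ij}A_{ij}.
\end{align}
```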
Abstract:
The objective of this work was to genotype the single nucleotide polymorphism (SNP) A2959G (AF159246) of the bovine CAST gene by the PCR-RFLP technique, and to report its use for the first time. For this, 147 Bos indicus and Bos taurus x Bos indicus animals were genotyped. The accuracy of the method was confirmed through direct sequencing of the PCR products of nine individuals. The lower frequency of the allele favorable to meat tenderness (A) in Bos indicus was confirmed. The use of PCR-RFLP for genotyping this SNP of the bovine CAST gene was shown to be robust and inexpensive, which will greatly facilitate its analysis by laboratories with basic infrastructure.
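A minimal sketch of how an allele frequency is estimated from genotype counts obtained by a method such as PCR-RFLP; the counts below are hypothetical, not the genotypes reported for the 147 animals.

```python
# Hedged sketch: favorable-allele frequency from hypothetical genotype counts.
aa, ag, gg = 10, 55, 82          # hypothetical counts of AA, AG and GG genotypes
n_animals = aa + ag + gg

freq_a = (2 * aa + ag) / (2 * n_animals)   # each animal carries two alleles
freq_g = 1.0 - freq_a
print(f"f(A) = {freq_a:.3f}, f(G) = {freq_g:.3f} in {n_animals} animals")
```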
Abstract:
The objective of this work was to estimate the genetic parameters, genotypic and phenotypic correlations, and direct and indirect genetic gains among and within rubber tree (Hevea brasiliensis) progenies. The experiment was set up in the municipality of Jaú, SP, Brazil. A randomized complete block design was used, with 22 treatments (progenies), six replicates, and 10 plants per plot at a spacing of 3x3 m. Three-year-old progenies were assessed for girth, rubber yield, and bark thickness through direct and indirect gains and genotypic correlations. The number of latex vessel rings showed the best correlations, correlating positively and significantly with girth and bark thickness. Selection gains among progenies were greater than those within progenies for all the variables analyzed. Total gains obtained were high, especially for girth increase and rubber yield, at 93.38 and 105.95%, respectively. Young progeny selection can maximize the expected genetic gains, reducing the rubber tree selection cycle.
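One common way expected gains from among- and within-progeny selection are written; the notation (heritability h² at each level, selection differential DS) is an assumption of this sketch and not taken from the paper.

```latex
\begin{align}
  GS_{\text{among}}  &= h^2_{\text{among}}  \, DS_{\text{among}}, \\
  GS_{\text{within}} &= h^2_{\text{within}} \, DS_{\text{within}}, \qquad
  GS_{\text{total}}  = GS_{\text{among}} + GS_{\text{within}}.
\end{align}
```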
Abstract:
Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. METHODS: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with (131)I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with (177)Lu-peptides; and case 3, hepatocellular carcinoma treated with (90)Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd) using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against density (ρ) and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and density ρ: case 1 (Δ = -0.56ρ + 0.62, R(2) = 0.93), case 2 (Δ = -0.91ρ + 0.96, R(2) = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R(2) = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), but to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for the 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; case 3, -1.5% to 2%). No linear relationship remained for cases 2 and 3, in contrast to case 1 (Δ = 0.41ρ - 0.38, R(2) = 0.88), although the slope in case 1 was less pronounced. CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. A simple density-correction method was proposed and improved the agreement of the absorbed dose calculations when using our voxel S value implementation.
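A minimal sketch of a per-voxel density correction of a dose-kernel dose map and of binning the relative difference against density. The correction used here (scaling by the water-to-local density ratio) is one simple assumption, not necessarily the paper's exact correction, and the dose and density maps are synthetic.

```python
# Hedged sketch: assumed density correction D_VDd = D_VD * rho_water / rho,
# and density-binned relative difference against a synthetic reference map.
import numpy as np

rng = np.random.default_rng(2)
rho = rng.uniform(0.3, 1.4, size=(40, 40, 40))      # voxel mass density (g/cm^3), synthetic
d_vd = rng.uniform(0.5, 2.0, size=rho.shape)        # DK dose assuming uniform (water) density, synthetic
rho_water = 1.0

d_vdd = d_vd * rho_water / rho                       # assumed correction for local density

# Relative difference (%) against a stand-in reference dose map, binned by density
d_ref = d_vd * (1.0 - 0.6 * (rho - rho_water))       # synthetic stand-in for a Monte Carlo map
delta = 100 * (d_vdd - d_ref) / d_ref

bins = np.linspace(0.3, 1.4, 12)
idx = np.digitize(rho.ravel(), bins)
binned = [delta.ravel()[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)]
print("bin-averaged delta (%):", np.round(binned, 1))
```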
Abstract:
Knowledge of the pathological diagnosis before deciding the best strategy for treating parasellar lesions is of prime importance, due to the relatively high morbidity and side effects of open direct approaches to this region, known to be rich in important vasculo-nervous structures. When imaging is not evocative enough to ascertain an accurate pathological diagnosis, a percutaneous biopsy through the transjugal-transoval route (of Hartel) may be performed to guide the therapeutic decision. The chapter is based on the authors' experience in 50 patients who underwent the procedure over the past ten years. There was no mortality and only little (mostly transient) morbidity. The pathological diagnostic accuracy of the method proved good, with a sensitivity of 0.83 and a specificity of 1. In the chapter the authors first recall the surgical anatomy background from personal laboratory dissections. They then describe the technical procedure, as well as the tissue harvesting method. Finally, they define indications together with the decision-making process. Due to the constrained trajectory of the biopsy needle inserted through the foramen ovale, accessible lesions are only those located in Meckel's trigeminal cave, the posterior sector of the cavernous sinus compartment, and the upper part of the petroclival region. The authors advise performing this percutaneous biopsy when imaging does not provide sufficient evidence of the pathological nature of the lesion for the therapeutic decision. The goal is to avoid unnecessary open surgery or radiosurgery, as well as inappropriate chemo- or radiotherapy.
Abstract:
In distributed energy production, permanent magnet synchronous generators (PMSG) are often connected to the grid via frequency converters, such as voltage source line converters. The price of the converter may constitute a large part of the cost of a generating set. Some of the permanent magnet synchronous generators with converters, and traditional separately excited synchronous generators, could be replaced by direct-on-line (DOL) non-controlled PMSGs. Small directly network-connected generators are likely to have large markets in the area of distributed electric energy generation. Typical prime movers could be windmills, watermills and internal combustion engines. DOL PMSGs could also be applied in island networks, such as ships and oil platforms. Various back-up power generating systems could also be implemented with DOL PMSGs. The benefits would be a lower price of the generating set and the robustness and ease of use of the system. The performance of DOL PMSGs is analyzed. The electricity distribution companies have regulations that constrain the design of generators connected to the grid; the general guidelines and recommendations are applied in the analysis. By analyzing the results produced by the simulation model for the permanent magnet machine, guidelines for efficient damper winding parameters for DOL PMSGs are presented. The simulation model is used to simulate grid connections and load transients. The damper winding parameters are calculated by the finite element method (FEM) and determined from experimental measurements. Three-dimensional finite element analysis (3D FEA) is carried out. The results from the simulation model and 3D FEA are compared with practical measurements from two prototype axial flux permanent magnet generators provided with damper windings. The dimensioning of the damper winding parameters is case specific: the damper winding should be dimensioned based on the moment of inertia of the generating set. It is shown that the damper winding has optimal values for reaching synchronous operation in the shortest period of time after transient operation. With optimal dimensioning, interference on the grid is minimized.
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of the systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions in the form of structural parameters to accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions to the problem are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric (in the mathematical sense that all parameter values are considered, including the degenerate cases, for which the system is solvable) algebraic systems of n equations in at least n+1 variables. By adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated into mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on the optimisation (with mathematical programming techniques or by adopting optimisation methods based on probability and statistics) of substructures using criteria calculated from the system-level response of multidegree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
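A minimal sketch of approximate position synthesis posed as direct optimisation, using the classical standard-form dyad equation W(e^{iβ_j} - 1) + Z(e^{iα_j} - 1) = δ_j with simple bounds standing in for inequality constraints on the structural parameters. The prescribed rotations and displacements are illustrative, not taken from the thesis, and this generic formulation is not the thesis's own mixed exact-approximate method.

```python
# Hedged sketch: least-squares synthesis of a single planar dyad by direct optimisation.
import numpy as np
from scipy.optimize import minimize

alpha = np.radians([0.0, 10.0, 22.0, 35.0, 50.0])                       # prescribed coupler rotations
delta = np.array([0, 0.4 + 0.1j, 0.9 + 0.3j, 1.3 + 0.7j, 1.6 + 1.2j])   # prescribed displacements

def objective(x):
    W = x[0] + 1j * x[1]
    Z = x[2] + 1j * x[3]
    beta = np.concatenate(([0.0], x[4:]))        # crank rotations, first position as reference
    r = W * (np.exp(1j * beta) - 1) + Z * (np.exp(1j * alpha) - 1) - delta
    return np.sum(np.abs(r) ** 2)                # sum of squared position errors

# Simple bounds act as inequality constraints on the structural parameters
x0 = np.array([1.0, 0.0, 0.5, 0.5, 0.2, 0.4, 0.6, 0.8])
bnds = [(-3, 3)] * 4 + [(-np.pi, np.pi)] * 4
res = minimize(objective, x0, bounds=bnds, method="L-BFGS-B")
print("objective:", res.fun)
print("W, Z:", res.x[0] + 1j * res.x[1], res.x[2] + 1j * res.x[3])
```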
Abstract:
Direct torque control (DTC) has become an accepted vector control method beside current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore, the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least-squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC, the stator flux linkage reference is usually kept constant; achieving the minimum current requires control of this reference. An on-line method to minimize the current by controlling the stator flux linkage reference is presented. The control of the reference above the base speed is also considered. A new flux linkage estimate is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
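A minimal sketch of a discrete-time voltage-model stator flux linkage estimate, ψ = ∫(u_s - R_s i_s) dt, with the pure integrator replaced by a first-order low-pass filter to limit drift. This is one common workaround, not necessarily the improved integration method of the thesis; the machine parameters and signals are synthetic assumptions.

```python
# Hedged sketch: low-pass-filtered voltage-model flux linkage estimator.
import numpy as np

f_grid, ts = 50.0, 1e-4                    # electrical frequency and sample time (assumed)
r_s = 0.05                                 # stator resistance in ohms (assumed)
omega_c = 2 * np.pi * 5.0                  # low-pass corner frequency (assumed)

t = np.arange(0, 0.2, ts)
u_s = 230 * np.sqrt(2) * np.exp(1j * 2 * np.pi * f_grid * t)    # stator voltage space vector (synthetic)
i_s = 10 * np.exp(1j * (2 * np.pi * f_grid * t - 0.5))          # stator current space vector (synthetic)

psi = np.zeros_like(u_s, dtype=complex)
for k in range(1, len(t)):
    emf = u_s[k] - r_s * i_s[k]
    # Low-pass filter replacing the open integrator: dpsi/dt = emf - omega_c * psi
    psi[k] = psi[k - 1] + ts * (emf - omega_c * psi[k - 1])

print("estimated flux linkage magnitude (steady state):", round(abs(psi[-1]), 3), "Vs")
```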