860 results for comparison method
Abstract:
Objectives: Left atrial (LA) volume (LAV) is a prognostically important biomarker for diastolic dysfunction, but its reproducibility on repeated testing is not well defined. LA assessment with 3-dimensional (3D) echocardiography (3DE) has been validated against magnetic resonance imaging, and we sought to assess whether it was superior to existing measurements for sequential echocardiographic follow-up. Methods: Patients (n = 100; 81 men; age 56 +/- 14 years) presenting for LA evaluation were studied with M-mode (MM) echocardiography, 2-dimensional (2D) echocardiography, and 3DE. Test-retest variation was assessed by a complete restudy by a separate sonographer within 1 hour, without alteration of hemodynamics or therapy. In all, 20 patients were studied for interobserver and intraobserver variation. LAVs were calculated by using the M-mode diameter and the planimetered atrial area in the apical 4-chamber view to calculate an assumed sphere, as well as by the prolate ellipsoid, Simpson's biplane, and biplane area-length methods. All were compared with 3DE. Results: The average LAV was 72 +/- 27 mL by 3DE. There was significant underestimation of LAV by M-mode (35 +/- 20 mL, r = 0.66, P < .01). The 3DE and various 2D echocardiographic techniques were well correlated: LA planimetry (85 +/- 38 mL, r = 0.77, P < .01), prolate ellipsoid (73 +/- 36 mL, r = 0.73, P = .04), area-length (64 +/- 30 mL, r = 0.74, P < .01), and Simpson's biplane (69 +/- 31 mL, r = 0.78, P = .06). Test-retest variation for 3DE was most favorable (r = 0.98, P < .01), with the prolate ellipsoid method showing the most variation. Interobserver agreement between measurements was best for 3DE (r = 0.99, P < .01) and worst for M-mode (r = 0.89, P < .01). Intraobserver results were similar to interobserver results, with the best correlation for 3DE (r = 0.99, P < .01) and the worst for LA planimetry (r = 0.91, P < .01). Conclusions: The 2D measurements correlate closely with 3DE.
Follow-up assessment in daily practice appears feasible and reliable with both 2D and 3D approaches.
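The 2D geometric models named above (prolate ellipsoid, biplane area-length, Simpson's biplane) follow standard textbook formulas; a minimal sketch of those formulas, not necessarily the exact implementations used in the study (areas in cm², diameters and lengths in cm, volumes in mL):

```python
import math

def lav_prolate_ellipsoid(d1_cm, d2_cm, d3_cm):
    """Prolate ellipsoid: V = (pi/6) * D1 * D2 * D3."""
    return math.pi / 6.0 * d1_cm * d2_cm * d3_cm

def lav_area_length(a4c_area_cm2, a2c_area_cm2, length_cm):
    """Biplane area-length: V = 8 * A_4C * A_2C / (3 * pi * L)."""
    return 8.0 * a4c_area_cm2 * a2c_area_cm2 / (3.0 * math.pi * length_cm)

def lav_simpson_biplane(diams_4c_cm, diams_2c_cm, length_cm):
    """Simpson's biplane (method of discs): V = (pi/4) * (L/n) * sum(a_i * b_i),
    where a_i and b_i are orthogonal disc diameters in the two apical views."""
    n = len(diams_4c_cm)
    h = length_cm / n
    return math.pi / 4.0 * h * sum(a * b for a, b in zip(diams_4c_cm, diams_2c_cm))
```

For example, an atrium with orthogonal diameters of 4, 4, and 5 cm gives a prolate ellipsoid volume of about 42 mL.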
Abstract:
In this paper we apply a new method for the determination of the surface area of carbonaceous materials, using local surface excess isotherms obtained from Grand Canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well-depth of the solid-fluid interaction. The range of well-depths considered in our GCMC simulations is from 10 to 100 K, which is wide enough to cover all the carbon surfaces we dealt with (for comparison, the well-depth for a perfect graphite surface is about 58 K). Given the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be written in integral form. Thus, given experimental data for nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained by an inversion process, using the regularization method. The total surface area is then obtained as the area under this distribution. We test this approach against a number of data sets from the literature and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the difference between the two surface areas is about 10%, underscoring the need to determine the surface area with a consistent and reliable method. We therefore suggest the approach of this paper as an alternative to the BET method, given the long-recognized unrealistic assumptions of BET theory. Besides the surface area, the method also provides the differential area distribution versus well-depth. This information could be used as a microscopic fingerprint of the carbon surface. It is expected that samples prepared from different precursors and different activation conditions will have distinct fingerprints.
We illustrate this with Cabot BP120, 280, and 460 samples; the differential area distributions obtained from the adsorption of argon at 77 K and of nitrogen at 77 K have exactly the same patterns, suggesting that these distributions are characteristic of the carbon.
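The inversion step described above, recovering an area distribution from measured isotherm data via regularization, reduces to a discretized linear problem. The sketch below is illustrative only: the kernel, grid sizes, noise level, and regularization parameter are stand-ins, not the paper's actual GCMC local isotherms.

```python
import numpy as np

# Hypothetical local excess isotherms K[i, j]: excess adsorption at pressure
# p_i on a surface patch with well-depth eps_j (the paper derives these from
# GCMC simulation; here a smooth toy kernel stands in).
p = np.linspace(0.01, 1.0, 40)           # reduced pressures
eps = np.linspace(10.0, 100.0, 30)       # well-depths (K), spanning 10-100 K
K = 1.0 / (1.0 + np.exp(-(p[:, None] * eps[None, :] / 20.0 - 2.0)))

# Synthetic "experimental" isotherm from a known area distribution plus noise
f_true = np.exp(-0.5 * ((eps - 58.0) / 10.0) ** 2)
y = K @ f_true + 0.01 * np.random.default_rng(0).normal(size=p.size)

# Tikhonov regularization: minimize ||K f - y||^2 + lam^2 ||f||^2
# by solving an augmented least-squares system.
lam = 0.1
A = np.vstack([K, lam * np.eye(eps.size)])
b = np.concatenate([y, np.zeros(eps.size)])
f_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The total surface area would then be the integral (sum) of `f_est` over the well-depth grid.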
Abstract:
We compared growth rates of the lemon shark, Negaprion brevirostris, from Bimini, Bahamas, and the Marquesas Keys (MK), Florida, using data obtained in a multi-year annual census. In both Bimini and the MK, we marked neonate and juvenile sharks with unique electronic identity tags. Sharks were tagged with tiny, subcutaneous transponders, a type of tagging thought to cause little, if any, disruption to normal growth patterns compared with conventional external tagging. Within the first 2 years of this project, no age data were recorded for sharks caught for the first time in Bimini. Therefore, we applied and tested two methods of age analysis: (1) a modified 'minimum convex polygon' method and (2) a new age-assigning method, the 'cut-off technique'. The cut-off technique proved to be the more suitable one, enabling us to identify the age of 134 of the 642 sharks of previously unknown age. This maximised the usable growth data included in our analysis. Annual absolute growth rates of juvenile, nursery-bound lemon sharks were almost constant for the two Bimini nurseries and are best described by a simple linear model (growth data were only available for age-0 sharks in the MK). Annual absolute growth for age-0 sharks was much greater in the MK than in either the North Sound (NS) or Shark Land (SL) at Bimini. Growth of SL sharks was significantly faster during the first 2 years of life than that of sharks in the NS population. However, in the MK, only growth in the first year was considered reliably estimated, owing to low recapture rates. Analyses indicated no significant differences in growth rates between males and females for any area.
Abstract:
Introduction - Group learning has been used to enhance deep (long-term) learning and to promote life skills, such as decision making, communication, and interpersonal skills. However, with increasing multiculturalism in higher education, there is little information available on the acceptance of this form of learning by Asian students or on its value to them. Methodology - Group-learning projects, incorporating a seminar presentation, were used in first-year veterinary anatomical science classes over two consecutive years (2003 and 2004) at the School of Veterinary Science, University of Queensland. Responses of Australian and Asian students to survey forms evaluating the learning experience were analyzed and compared. Results - All students responded positively to the group learning, indicating that it was a useful learning experience and a great way to meet colleagues. There were no significant differences between Asian and Australian students in overall responses to the survey, except that Asian students scored significantly higher than Australian students in identifying specific skills that needed improving. Conclusions - Group learning can be successfully used in multicultural teaching to enhance deep learning. This form of learning helps to remove cultural barriers and establishes a platform for continued successful group learning throughout the program.
Abstract:
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz, and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches are proposed for determining the drives of the individual elements in the RF resonator. The first method is an iterative optimization technique whose kernel evaluates the RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of the resonator; the second is a regularization-based technique, in which a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the optimization method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
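The regularization-based approach, building an explicit sensitivity matrix and solving the ill-posed drive-determination problem, can be sketched with regularized normal equations. The sensitivity matrix below is random stand-in data, not a real FDTD-derived sensitivity map, and the regularization weight is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_rungs = 200, 16

# Hypothetical sensitivity matrix S: complex B1 contribution at each voxel
# per unit drive of each resonator rung (FDTD-derived in the paper).
S = rng.normal(size=(n_vox, n_rungs)) + 1j * rng.normal(size=(n_vox, n_rungs))
b_target = np.ones(n_vox, dtype=complex)   # uniform B1 over the imaging plane

# Regularized solve for the rung drives: d = (S^H S + lam I)^-1 S^H b
lam = 1.0
d = np.linalg.solve(S.conj().T @ S + lam * np.eye(n_rungs),
                    S.conj().T @ b_target)

# Coefficient of variation of |B1| as a simple homogeneity measure
b_achieved = S @ d
homogeneity = np.std(np.abs(b_achieved)) / np.mean(np.abs(b_achieved))
```

Increasing `lam` trades field fidelity for smaller (lower-power) drive amplitudes, which is the usual knob in such ill-posed shimming problems.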
Abstract:
The paper presents a method for designing circular, shielded biplanar coils that can generate any desired field. A particular feature of these coils is that the target field may be located asymmetrically within the coil. A transverse component of the magnetic field produced by the coil is made to match a prescribed target field over the surfaces of two concentric spheres that define the target-field location. The paper shows winding patterns and fields for several gradient and shim coils. It examines the effect of the finite coil size on the winding patterns, using a Fourier-transform calculation for comparison.
Abstract:
Forty-four soils from under native vegetation and a range of post-clearing management practices were analysed for 'labile' organic carbon (OC) using both the particulate organic carbon (POC) method and the 333 mM KMnO4 (MnoxC) method. Although there was some correlation between the two methods, the POC method was more sensitive, by about a factor of 2, to rapid loss of OC as a result of management or land-use change. Unlike the POC method, the MnoxC method was insensitive to rapid gains in total OC (TOC) following establishment of pasture on degraded soil. The MnoxC method was shown to be particularly sensitive to the presence of lignin or lignin-like compounds and is therefore likely to be very sensitive to the nature of the vegetation present at or near the time of sampling; this explains the insensitivity of the method to OC gain under pasture. The presence of charcoal is an issue with both techniques, but whereas the charcoal contribution to the POC fraction can be assessed, the MnoxC method cannot distinguish between charcoal and most biomolecules found in soil. Because of these limitations, the MnoxC method should not be applied indiscriminately across different soil types and management practices.
Abstract:
We apply the projected Gross-Pitaevskii equation (PGPE) formalism to the experimental problem of the shift in critical temperature Tc of a harmonically confined Bose gas, as reported in Gerbier et al., Phys. Rev. Lett. 92, 030405 (2004). The PGPE method includes critical fluctuations, and we find that its results differ from those of various mean-field theories and are in best agreement with the experimental data. To unequivocally observe beyond-mean-field effects, however, the experimental precision must either improve by an order of magnitude or more strongly interacting systems must be considered. This is the first application of a classical field method to make quantitative comparison with experiment.
Abstract:
Estimates of microbial crude protein (MCP) production by ruminants, using a method based on the excretion of purine derivatives (PD) in urine, require an estimate of the animal's endogenous PD excretion. Current methods allocate a single value to all cattle. An experiment was carried out to compare endogenous PD excretion in Bos taurus and high-content B. indicus (hereafter, B. indicus) cattle. Five Holstein-Friesian (B. taurus) and 5 Brahman (>75% B. indicus) steers (mean liveweight 326 +/- 3.0 kg) were used in a fasting study. Steers were fed a low-quality buffel grass (Cenchrus ciliaris; 59.4 g crude protein/kg dry matter) hay at estimated maintenance requirements for 19 days, after which hay intake was incrementally reduced for 2 days and the steers were fasted for 7 days. The excretion of PD in urine was measured daily for the last 6 days of the fasting period, and the mean represented the daily endogenous PD excretion. Excretion of endogenous PD in the urine of B. indicus steers was less than half that of the B. taurus steers (190 vs 414 μmol/kg W^0.75 per day; combined s.e. 37.2 μmol/kg W^0.75 per day; P < 0.001). It was concluded that the use of a single value for endogenous PD excretion is inappropriate for MCP estimation and that subspecies-specific values would improve precision.
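In PD-based MCP estimation, the endogenous contribution (scaled by metabolic liveweight, W^0.75) is subtracted from total urinary PD before attributing the remainder to microbial purines. A sketch of that arithmetic, using the subspecies-specific endogenous rates reported above with an illustrative (hypothetical) total excretion:

```python
def microbial_pd(total_pd_mmol_day, liveweight_kg, endog_umol_kgW075_day):
    """Estimate purine derivatives of microbial origin (mmol/day) by
    subtracting endogenous PD excretion scaled by metabolic weight W^0.75."""
    endogenous_mmol_day = endog_umol_kgW075_day * liveweight_kg ** 0.75 / 1000.0
    return total_pd_mmol_day - endogenous_mmol_day

# For a hypothetical 326 kg steer excreting 50 mmol PD/day in total, the two
# endogenous rates (414 vs 190 umol/kg W^0.75 per day) give quite different
# microbial PD estimates:
taurus = microbial_pd(50.0, 326.0, 414.0)
indicus = microbial_pd(50.0, 326.0, 190.0)
```

Using a single endogenous value for both subspecies would therefore bias the MCP estimate for one of them, which is the abstract's conclusion.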
Abstract:
Traditional vaccines consisting of whole attenuated microorganisms, killed microorganisms, or microbial components, administered with an adjuvant (e.g. alum), have proved extremely successful. However, to develop new vaccines, or to improve current vaccines, new vaccine development techniques are required. Peptide vaccines offer the capacity to administer only the minimal microbial components necessary to elicit appropriate immune responses, minimizing the risk of vaccination-associated adverse effects and focusing the immune response toward important antigens. Peptide vaccines, however, are generally poorly immunogenic, necessitating administration with powerful, and potentially toxic, adjuvants. The attachment of lipids to peptide antigens has been demonstrated to be a potentially safe method for adjuvanting peptide epitopes. The lipid core peptide (LCP) system, which incorporates a lipidic adjuvant, carrier, and peptide epitopes into a single molecular entity, has been demonstrated to boost the immunogenicity of attached peptide epitopes without the need for additional adjuvants. The synthesis of LCP systems normally yields a product that cannot be purified to homogeneity. The current study describes the development of methods for the synthesis of highly pure LCP analogs using native chemical ligation. Because of the highly lipophilic nature of the LCP lipid adjuvant, difficulties (e.g. poor solubility) were experienced with the ligation reactions. The addition of organic solvents to the ligation buffer solubilized lipidic species but did not result in successful ligation reactions. In contrast, the addition of approximately 1% (w/v) sodium dodecyl sulfate (SDS) proved successful, enabling the synthesis of two highly pure, tri-epitopic Streptococcus pyogenes LCP analogs.
Subcutaneous immunization of B10.BR (H-2k) mice with one of these vaccines, without the addition of any adjuvant, elicited high levels of systemic IgG antibodies against each of the incorporated peptides. Copyright (c) 2006 European Peptide Society and John Wiley & Sons, Ltd.
Abstract:
Therapeutic monitoring with dosage individualization of sirolimus drug therapy is standard clinical practice for organ transplant recipients. For several years sirolimus monitoring has been restricted by the lack of an immunoassay. The recent reintroduction of the microparticle enzyme immunoassay (MEIA®) for sirolimus on the IMx® analyser has the potential to address this situation. This study, using patient samples, compared the MEIA® sirolimus method with an established HPLC-tandem mass spectrometry (HPLC-MS/MS) method. An established HPLC-UV assay was used for independent cross-validation. For quality control materials (5, 11, 22 μg/L), the MEIA® showed acceptable validation criteria, with intra- and inter-run precision (CV) and accuracy (bias) of < 8% and < 13%, respectively. The lower limit of quantitation was approximately 3 μg/L. The performance of the immunoassay was compared with HPLC-MS/MS using EDTA whole-blood samples obtained from various types of organ transplant recipients (n = 116). The resultant Deming regression line was: MEIA = 1.3 x HPLC-MS/MS + 1.3 (r = 0.967, s(y/x) = 1), with a mean bias of 49.2% +/- 23.1% (range, -2.4% to 128%; P < 0.001). The reason for the large and variable bias was not explored in this study, but cross-reactivity of sirolimus metabolites with the MEIA® antibody could be a substantial contributing factor. While the MEIA® sirolimus method may be an adjunct to sirolimus dosage individualization in transplant recipients, users must consider the implications of the substantial and variable bias when interpreting results. In selected patients where difficult clinical issues arise, reference to a specific chromatographic method may be required.
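The Deming regression used for the method comparison fits a line allowing measurement error in both assays, unlike ordinary least squares, which treats the reference method as error-free. A minimal implementation, assuming an error-variance ratio of 1 (the usual default when the two assays' imprecisions are comparable):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression: fit y = slope*x + intercept allowing error in both
    variables; delta is the ratio of the y- to x-error variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx
```

Fitting data that lie exactly on y = 1.3x + 1.3 recovers that slope and intercept, matching the form of the regression line reported above.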
Abstract:
Saturated phosphatidylcholines (PCs), particularly dipalmitoylphosphatidylcholine (DPPC), predominate in surfactant lining the alveoli, although little is known about the relationship between saturated and unsaturated PCs on the outer surface of the lung, the pleura. Seven healthy cats were anesthetized and a bronchoalveolar lavage (BAL) was performed, immediately followed by a pleural lavage (PL). Lipid was extracted from lavage fluid and then analyzed for saturated (primarily DPPC) and unsaturated PC species using high-performance liquid chromatography (HPLC) with combined fluorescence and ultraviolet detection. Dilution of epithelial lining fluid (ELF) in lavage fluids was corrected for using the urea method. The concentration of DPPC in BAL fluid (85.3 +/- 15.7 μg/mL) was significantly higher (P = 0.021) than that of unsaturated PCs (approximately 40 μg/mL). However, unsaturated PCs (approximately 34 μg/mL), particularly stearoyl-linoleoyl-phosphatidylcholine (SLPC; 17.4 +/- 6.8 μg/mL), were significantly higher (P = 0.021) than DPPC (4.3 +/- 1.8 μg/mL) in PL fluid. These results show that unsaturated PCs appear functionally more important in the pleural cavity, which may have implications for surfactant replenishment following pleural disease or thoracic surgery. (c) 2005 Published by Elsevier Ltd.
Abstract:
Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localization underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. 
All methods showed lower performance on the LOCATE dataset, and performance varied across individual subcellular localizations. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
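The per-location sensitivity and specificity used in the evaluation reduce to one-vs-rest counts of true/false positives and negatives. A minimal sketch; the labels below are illustrative, not the SwissProt or LOCATE data:

```python
def sensitivity_specificity(true_locs, pred_locs, location):
    """Per-location sensitivity and specificity for a multi-class predictor,
    treating the given location as positive and all others as negative."""
    pairs = list(zip(true_locs, pred_locs))
    tp = sum(t == location and p == location for t, p in pairs)
    fn = sum(t == location and p != location for t, p in pairs)
    tn = sum(t != location and p != location for t, p in pairs)
    fp = sum(t != location and p == location for t, p in pairs)
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec
```

Comparing these values against what random assignment would achieve (as the study does) guards against trivially high specificity for rare locations.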
Abstract:
Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV), and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate the noise. The performance of the methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when the noise level is low; but, as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low- to medium-noise situations.
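Of the three parameter-choice rules, GCV is the most self-contained: it selects the lambda minimizing GCV(lam) = ||A x_lam - b||^2 / (m - sum_i s_i^2/(s_i^2 + lam^2))^2, which one SVD makes cheap to evaluate over a whole grid of candidates. A sketch on synthetic ill-posed data, not an ECG torso model:

```python
import numpy as np

def gcv_tikhonov(A, b, lams):
    """Choose the zero-order Tikhonov parameter by generalized cross
    validation (GCV), using the SVD to evaluate residual norms and
    effective degrees of freedom for each candidate lambda."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    best_lam, best_g = None, np.inf
    for lam in lams:
        f = s ** 2 / (s ** 2 + lam ** 2)                 # Tikhonov filter factors
        resid2 = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
        g = resid2 / (m - f.sum()) ** 2                   # GCV functional
        if g < best_g:
            best_lam, best_g = lam, g
    x = Vt.T @ ((s / (s ** 2 + best_lam ** 2)) * beta)    # regularized solution
    return best_lam, x

# Synthetic ill-posed problem: rapidly decaying singular spectrum plus noise
rng = np.random.default_rng(0)
n = 30
s_decay = 10.0 ** (-np.arange(n) / 4.0)
Q1, _ = np.linalg.qr(rng.normal(size=(n, n)))
Q2, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = Q1 @ np.diag(s_decay) @ Q2.T
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-4 * rng.normal(size=n)

lam_best, x_reg = gcv_tikhonov(A, b, np.logspace(-8, 0, 60))
```

The L-curve and discrepancy-principle rules would pick their own lambdas from the same filtered-SVD machinery, differing only in the criterion applied to the residual and solution norms.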
Abstract:
Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone condition. The objective of this work was to compare the Tropical Rapid Appraisal of Riparian Condition (TRARC) method to a satellite-image-based approach. TRARC was developed for rapid assessment of the environmental condition of savanna riparian zones. The comparison assessed mapping accuracy, representativeness of TRARC assessment, cost-effectiveness, and suitability for multi-temporal analysis. Two multi-spectral QuickBird images captured in 2004 and 2005, together with coincident field data covering sections of the Daly River in the Northern Territory, Australia, were used in this work. Both field and image data were processed to map riparian health indicators (RHIs), including percentage canopy cover, organic litter, canopy continuity, stream bank stability, and extent of tree clearing. Spectral vegetation indices, image segmentation, and supervised classification were used to produce RHI maps. QuickBird image data were used to examine whether the spatial distribution of TRARC transects provided a representative sample of ground-based RHI measurements. Results showed that TRARC transects were required to cover at least 3% of the study area to obtain a representative sample. The mapping accuracy and costs of the image-based approach were compared to those of the ground-based TRARC approach. Results showed that TRARC was more cost-effective at smaller scales (1-100 km), while image-based assessment becomes more feasible at regional scales (100-1000 km). Finally, the ability to use both the image- and field-based approaches for multi-temporal analysis of RHIs was assessed. Change detection analysis demonstrated that image data can provide detailed information on gradual change, while the TRARC method was only able to identify changes at a grosser scale.
In conclusion, the results from the two methods were considered to complement each other when used at appropriate spatial scales.
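Spectral vegetation indices of the kind used for the RHI mapping are simple band ratios. The standard NDVI, for example (the reflectance values below are illustrative, not QuickBird data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red).
    Dense green canopy pushes the value toward +1; bare soil sits near 0."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Per-pixel NDVI for a toy 2x2 patch of NIR and red reflectances
nir_band = np.array([[0.50, 0.45], [0.30, 0.20]])
red_band = np.array([[0.10, 0.12], [0.20, 0.18]])
ndvi_map = ndvi(nir_band, red_band)
```

Thresholding or classifying such index maps is one common route to canopy-cover indicators like those compared in the study.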