991 results for Quantitative micrographic parameters
Abstract:
The Imbrie and Kipp transfer function method (IKM) and the modern analog technique (MAT) are accepted tools for quantitative paleoenvironmental reconstructions. However, no uncomplicated, flexible software has been available to apply these methods on modern computers. For this reason the software packages PaleoToolBox, MacTransfer, WinTransfer, MacMAT, and PanPlot have been developed. The PaleoToolBox package provides a flexible tool for the preprocessing of microfossil reference and downcore data as well as hydrographic reference parameters. It includes procedures to randomize the raw databases; to switch specific species in or out of the total species list; to establish individual ranking systems and apply them to the reference and downcore databases; and to convert the prepared databases into the file formats of the IKM and MAT software for estimation of paleohydrographic parameters.
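The preprocessing steps listed above can be sketched in a few lines. This is an illustrative outline only, not PaleoToolBox code; the species names and the percentage normalization are assumptions for the example.

```python
import random

def preprocess(samples, species, exclude=(), seed=42):
    """Illustrative sketch: drop switched-out species, renormalize counts
    to percentages, and randomize the sample order (not PaleoToolBox code)."""
    keep = [sp for sp in species if sp not in exclude]
    out = []
    for counts in samples:
        kept = {sp: counts.get(sp, 0) for sp in keep}
        total = sum(kept.values()) or 1
        out.append({sp: 100.0 * v / total for sp, v in kept.items()})
    random.Random(seed).shuffle(out)  # reproducible randomization
    return out

data = [{"G. bulloides": 30, "N. pachyderma": 70},
        {"G. bulloides": 10, "N. pachyderma": 85, "G. ruber": 5}]
prepared = preprocess(data, ["G. bulloides", "N. pachyderma", "G. ruber"],
                      exclude=("G. ruber",))
# Each prepared sample now sums to 100% over the included species only.
```

The same prepared structures would then be written out in the input formats expected by the IKM and MAT programs.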
Abstract:
Process mineralogy provides the mineralogical information required by geometallurgists to address the inherent variation of geological data. The successful beneficiation of ores depends largely on the ability of mineral processing to be efficiently adapted to the ore characteristics, with liberation being one of the most relevant mineralogical parameters. The liberation characteristics of ores are intimately related to mineral texture. Therefore, the characterization of liberation necessarily requires the identification and quantification of those textural features with a major bearing on mineral liberation. From this point of view, grain size, bonding between mineral grains, and intergrowth type are considered the most influential textural attributes. While the quantification of grain size is a routine output of current automated technologies, information about grain boundaries and intergrowth types is usually descriptive and difficult to quantify for inclusion in a geometallurgical model. Aiming at the systematic and quantitative analysis of intergrowth type within mineral particles, a new methodology based on digital image analysis has been developed. In this work, the ability of this methodology to achieve a more complete characterization of liberation is explored through the analysis of chalcopyrite in the rougher concentrate of the Kansanshi copper-gold mine (Zambia). The results show that the method provides valuable textural information for a better understanding of mineral behaviour during concentration processes. The potential of the method is enhanced by the fact that it provides data unavailable from current technologies. This opens up new perspectives on the quantitative analysis of mineral processing performance based on textural attributes.
Abstract:
Soil structure plays an important role in flow and transport phenomena, and a quantitative characterization of the spatial heterogeneity of the pore space geometry is beneficial for prediction of soil physical properties. Morphological features such as pore-size distribution, pore space volume or pore-solid surface can be altered by different soil management practices. Irregularity of these features and their changes can be described using fractal geometry. In this study, we focus primarily on the characterization of soil pore space as a 3D geometrical shape by fractal analysis and on the ability of fractal dimensions to differentiate between two a priori different soil structures. We analyze X-ray computed tomography (CT) images of soil samples from two nearby areas with contrasting management practices. Within these two different soil systems, samples were collected from three depths. Fractal dimensions of the pore-size distributions differed depending on soil use, and averaged values also differed at each depth. Fractal dimensions of the volume and surface of the pore space were lower in the tilled soil than in the natural soil, but their standard deviations were higher in the former as compared to the latter. Also, soil use was observed to be a factor with a statistically significant effect on the fractal parameters. Fractal parameters provide useful complementary information about changes in soil structure due to changes in soil management. Read More: http://www.worldscientific.com/doi/abs/10.1142/S0218348X14400118
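As a concrete illustration of the kind of fractal analysis described above, a box-counting estimate of the fractal dimension of a 3D binary pore image can be written in a few lines. This is a generic sketch, not the software used in the study:

```python
import numpy as np

def box_counting_dimension(volume, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a 3D binary image:
    count occupied boxes N(s) at each box size s, then fit
    log N(s) = -D log s + c and return D."""
    counts = []
    for s in sizes:
        # Trim so each axis is divisible by s, then tile into s*s*s boxes.
        nz, ny, nx = (d - d % s for d in volume.shape)
        v = volume[:nz, :ny, :nx]
        boxes = v.reshape(nz // s, s, ny // s, s, nx // s, s)
        occupied = boxes.any(axis=(1, 3, 5))
        counts.append(occupied.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled cube has dimension 3.
cube = np.ones((32, 32, 32), dtype=bool)
print(round(box_counting_dimension(cube), 2))  # → 3.0
```

In practice the binary `volume` would come from thresholded CT slices, and dimensions below 3 indicate an irregular, space-filling-but-porous structure.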
Abstract:
The rate constants for reduction of the flavoenzyme, l-lactate oxidase, and a mutant (in which alanine 95 is replaced by glycine), by a series of para-substituted mandelates, in both the 2-1H- and 2-2H- forms, have been measured by rapid reaction spectrophotometry. In all cases, significant isotope effects (1H/2H = 3–7) on the rate constants of flavin reduction were found, indicating that flavin reduction is a direct measure of α-C-H bond breakage. The rate constants show only a small influence of the electronic characteristics of the substituents, but show a good correlation when combined with some substituent volume parameters. A surprisingly good correlation is found with the molecular mass of the substrate. The results are compatible with any mechanism in which there is little development of charge in the transition state. This could be a transfer of hydride to the flavin N(5) position or a synchronous mechanism in which the α-C-H is formally abstracted as a H+ while the resulting charge is simultaneously neutralized by another event.
Abstract:
Site-directed mutagenesis and combinatorial libraries are powerful tools for providing information about the relationship between protein sequence and structure. Here we report two extensions that expand the utility of combinatorial mutagenesis for the quantitative assessment of hypotheses about the determinants of protein structure. First, we show that resin-splitting technology, which allows the construction of arbitrarily complex libraries of degenerate oligonucleotides, can be used to construct more complex protein libraries for hypothesis testing than can be constructed from oligonucleotides limited to degenerate codons. Second, using eglin c as a model protein, we show that regression analysis of activity scores from library data can be used to assess the relative contributions to the specific activity of the amino acids that were varied in the library. The regression parameters derived from the analysis of a 455-member sample from a library wherein four solvent-exposed sites in an α-helix can contain any of nine different amino acids are highly correlated (P < 0.0001, R2 = 0.97) to the relative helix propensities for those amino acids, as estimated by a variety of biophysical and computational techniques.
Abstract:
Linkage and association analyses were performed to identify loci affecting disease susceptibility by scoring previously characterized sequence variations such as microsatellites and single nucleotide polymorphisms. Lack of markers in regions of interest, as well as difficulty in adapting various methods to high-throughput settings, often limits the effectiveness of the analyses. We have adapted the Escherichia coli mismatch detection system, employing the factors MutS, MutL and MutH, for use in PCR-based, automated, high-throughput genotyping and mutation detection of genomic DNA. Optimal sensitivity and signal-to-noise ratios were obtained in a straightforward fashion because the detection reaction proved to be principally dependent upon monovalent cation concentration and MutL concentration. Quantitative relationships of the optimal values of these parameters with length of the DNA test fragment were demonstrated, in support of the translocation model for the mechanism of action of these enzymes, rather than the molecular switch model. Thus, rapid, sequence-independent optimization was possible for each new genomic target region. Other factors potentially limiting the flexibility of mismatch scanning, such as positioning of dam recognition sites within the target fragment, have also been investigated. We developed several strategies, which can be easily adapted to automation, for limiting the analysis to intersample heteroduplexes. Thus, the principal barriers to the use of this methodology, which we have designated PCR candidate region mismatch scanning, in cost-effective, high-throughput settings have been removed.
Abstract:
To quantitatively investigate the trafficking of the transmembrane lectin VIP36 and its relation to cargo-containing transport carriers (TCs), we analyzed a C-terminal fluorescent-protein (FP) fusion, VIP36-SP-FP. When expressed at moderate levels, VIP36-SP-FP localized to the endoplasmic reticulum, Golgi apparatus, and intermediate transport structures, and colocalized with epitope-tagged VIP36. Temperature shift and pharmacological experiments indicated VIP36-SP-FP recycled in the early secretory pathway, exhibiting trafficking representative of a class of transmembrane cargo receptors, including the closely related lectin ERGIC53. VIP36-SP-FP trafficking structures comprised tubules and globular elements, which translocated in a saltatory manner. Simultaneous visualization of anterograde secretory cargo and VIP36-SP-FP indicated that the globular structures were pre-Golgi carriers, and that VIP36-SP-FP segregated from cargo within the Golgi and was not included in post-Golgi TCs. Organelle-specific bleach experiments directly measured the exchange of VIP36-SP-FP between the Golgi and endoplasmic reticulum (ER). Fitting a two-compartment model to the recovery data predicted first order rate constants of 1.22 ± 0.44%/min for ER → Golgi, and 7.68 ± 1.94%/min for Golgi → ER transport, revealing a half-time of 113 ± 70 min for leaving the ER and 1.67 ± 0.45 min for leaving the Golgi, and accounting for the measured steady-state distribution of VIP36-SP-FP (13% Golgi/87% ER). Perturbing transport with AlF4− treatment altered VIP36-SP-GFP distribution and changed the rate constants. The parameters of the model suggest that relatively small differences in the first order rate constants, perhaps manifested in subtle differences in the tendency to enter distinct TCs, result in large differences in the steady-state localization of secretory components.
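The two-compartment model above can be checked numerically: at steady state the forward and backward fluxes balance, which fixes the ER/Golgi split from the two rate constants alone. The rate constants below are the values quoted in the abstract; the script is an illustrative check, not the authors' fitting code.

```python
# First-order two-compartment exchange: ER <-> Golgi.
k_er_to_golgi = 1.22 / 100.0   # fraction per min, ER -> Golgi (abstract value)
k_golgi_to_er = 7.68 / 100.0   # fraction per min, Golgi -> ER (abstract value)

# At steady state, flux out of the ER equals flux back in:
# k_eg * ER = k_ge * Golgi, with ER + Golgi = 1.
golgi_fraction = k_er_to_golgi / (k_er_to_golgi + k_golgi_to_er)
er_fraction = 1.0 - golgi_fraction

# ~13.7% Golgi / 86.3% ER, in line with the measured 13%/87% distribution.
print(f"Golgi {golgi_fraction:.1%}, ER {er_fraction:.1%}")
```

The six-fold asymmetry in the rate constants is what produces the strongly ER-weighted steady state, illustrating the paper's point that small differences in first-order rates yield large differences in localization.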
Abstract:
BACKGROUND Researchers evaluating angiomodulating compounds as part of scientific projects or pre-clinical studies are often confronted with limitations of the applied animal models. Rough, insufficient early-stage compound assessment without reliable quantification of the vascular response accounts, at least partially, for the low rate of transition to the clinic. OBJECTIVE To establish an advanced, rapid and cost-effective angiogenesis assay for the precise and sensitive assessment of angiomodulating compounds using zebrafish caudal fin regeneration. It should provide information regarding the angiogenic mechanisms involved and should include qualitative and quantitative data of drug effects in a non-biased and time-efficient way. APPROACH & RESULTS Basic vascular parameters (total regenerated area, vascular projection area, contour length, vessel area density) were extracted from in vivo fluorescence microscopy images using a stereological approach. Skeletonization of the vasculature by our custom-made software Skelios provided additional parameters, including "graph energy" and "distance to farthest node". The latter gave important insights into the complexity, connectivity and maturation status of the regenerating vascular network. The use of a reference point (vascular parameters prior to amputation) is unique to the model and crucial for a proper assessment. Additionally, the assay provides exceptional possibilities for correlative microscopy by combining in vivo imaging and morphological investigation of the area of interest. The 3-way correlative microscopy links the dynamic changes in vivo with their structural substrate at the subcellular level. CONCLUSIONS The improved zebrafish fin regeneration model with advanced quantitative analysis and optional 3-way correlative morphology is a promising in vivo angiogenesis assay, well suited for basic research and preclinical investigations.
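A "distance to farthest node" metric on a skeletonized vessel network can be read as a graph eccentricity, computable by breadth-first search. The sketch below is illustrative only; Skelios itself may, for example, weight edges by branch length rather than counting hops.

```python
from collections import deque

def farthest_node_distance(adjacency, start):
    """Hop distance from `start` to the farthest reachable node in a
    skeleton graph (illustrative reading of 'distance to farthest node')."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in adjacency[node]:
            if nb not in seen:
                seen[nb] = seen[node] + 1
                queue.append(nb)
    return max(seen.values())

# Toy skeleton: a Y-shaped vessel tree.
tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(farthest_node_distance(tree, 0))  # → 3 (path 0-1-3-4)
```

A growing, well-connected regenerating network tends to increase this distance, which is why it tracks the maturation status described above.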
Abstract:
Background: False-negative interpretations of dobutamine stress echocardiography (DSE) may be associated with reduced wall stress. Using measurements of contraction, we sought to determine whether these segments were actually ischemic but unrecognized, or showed normal contraction. Methods: We studied 48 patients (29 men; mean age 60 +/- 10 years) with normal regional function on the basis of standard qualitative interpretation of DSE. At coronary angiography within 6 months of DSE, 32 were identified as having true-negative and 16 as having false-negative results of DSE. Three apical views were used to measure regional function with color Doppler tissue, integrated backscatter, and strain rate imaging. Cyclic variation of integrated backscatter was measured in 16 segments, and strain rate and peak systolic strain were calculated in 6 walls at rest and peak stress. Results: Segments with false-negative results of DSE were divided into 2 groups, with and without low wall stress, according to previously published cut-off values. Age, sex, left ventricular mass, left ventricular geometric pattern, and peak workload were not significantly different between patients with true- and false-negative results of DSE. Importantly, no significant differences in cyclic variation and strain parameters at rest and peak stress were found among segments with true- and false-negative results of DSE with and without low wall stress. Stenosis severity had no influence on cyclic variation and strain parameters at peak stress. Conclusions: False-negative results of DSE reflect lack of ischemia rather than underinterpretation of regional left ventricular function. Quantitative markers are unlikely to increase the sensitivity of DSE.
Abstract:
In this paper we propose a composite depth of penetration (DOP) approach to excluding bottom reflectance when mapping water quality parameters from Landsat Thematic Mapper (TM) data in the shallow coastal zone of Moreton Bay, Queensland, Australia. Three DOPs were calculated from TM1, TM2 and TM3, in conjunction with bathymetric data, at an accuracy ranging from +/-5% to +/-23%. These depths were used to segment the image into four DOP zones. Sixteen in situ water samples were collected concurrently with the recording of the satellite image. These samples were used to establish regression models for total suspended sediment (TSS) concentration and Secchi depth within each DOP zone. The models contain identical bands and band transformations for both parameters; they are linear for TSS concentration and logarithmic for Secchi depth. Based on these models, TSS concentration and Secchi depth were mapped from the satellite image in the respective DOP zones. The mapped patterns are consistent with the in situ observations. Spatially, overestimation and underestimation of the parameters are restricted to localised areas but related to the absolute value of the parameters. The mapping was accomplished more accurately using multiple DOP zones than using a single zone in shallower areas. The composite DOP approach enables the mapping to be extended to areas as shallow as <3 m. (C) 2004 Elsevier Inc. All rights reserved.
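The two per-zone model forms (linear for TSS, logarithmic for Secchi depth) can be illustrated with a short fit. The reflectance and water-quality values below are hypothetical, since the abstract does not give the in situ data:

```python
import numpy as np

# Hypothetical per-zone calibration data (not from the paper).
band = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # TM band reflectance
tss = np.array([5.1, 9.8, 15.2, 20.1, 24.9])      # TSS, mg/L
secchi = np.array([3.2, 2.1, 1.6, 1.2, 1.0])      # Secchi depth, m

# Linear model for TSS, logarithmic model for Secchi depth.
a, b = np.polyfit(band, tss, 1)                   # TSS = a*band + b
c, d = np.polyfit(np.log(band), secchi, 1)        # Secchi = c*ln(band) + d

print(f"TSS = {a:.1f}*band + {b:.2f}")
print(f"Secchi = {c:.2f}*ln(band) + {d:.2f}")
```

In the composite approach, one such pair of models would be fitted and applied separately within each DOP zone rather than across the whole scene.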
Abstract:
A common problem encountered during the development of MS methods for the quantitation of small organic molecules by LC-MS is the formation of non-covalently bound species, or adducts, in the electrospray interface. Often the population of the molecular ion is insignificant compared with those of all other forms of the analyte produced in the electrospray, making it difficult to obtain the sensitivity required for accurate quantitation. We have investigated the effects of the following variables: orifice potential, nebulizer gas flow, temperature, solvent composition and sample pH on the relative distributions of ions of the types MH+, MNa+, MNH4+, and 2MNa+, where M represents a small organic molecule, BAY 11-7082 ((E)-3-[4-methylphenylsulfonyl]-2-propenenitrile). Orifice potential, solvent composition and sample pH had the greatest influence on the relative distributions of these ions, making these parameters the most useful for optimizing methods for the quantitation of small molecules.
Abstract:
We report methods for correcting the photoluminescence emission and excitation spectra of highly absorbing samples for re-absorption and inner filter effects. We derive the general form of the correction, and investigate various methods for determining the parameters. Additionally, the correction methods are tested with highly absorbing fluorescein and melanin (broadband absorption) solutions; the expected linear relationships between absorption and emission are recovered upon application of the correction, indicating that the methods are valid. These procedures allow accurate quantitative analysis of the emission of low quantum yield samples (such as melanin) at concentrations where absorption is significant. (c) 2004 Elsevier B.V. All rights reserved.
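A commonly used first-order form of such a re-absorption/inner-filter correction scales the observed intensity by the mean attenuation at the excitation and emission wavelengths. This is the textbook approximation, not necessarily the general form derived in the paper, and it assumes absorbance measured over the same 1 cm path with emission collected from the cuvette centre:

```python
def inner_filter_correction(f_obs, a_ex, a_em):
    """Textbook first-order inner-filter correction (illustrative; the
    paper derives a more general form): boost the observed fluorescence
    by the mean absorbance at the excitation and emission wavelengths."""
    return f_obs * 10 ** ((a_ex + a_em) / 2)

# At A_ex = 0.3 and A_em = 0.1 the observed signal is boosted ~1.58x.
print(round(inner_filter_correction(1.0, 0.3, 0.1), 2))  # → 1.58
```

Applying such a correction is what restores the linear absorption-emission relationship the authors use to validate their method at high concentrations.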
Abstract:
High-performance liquid chromatography coupled by an electrospray ion source to a tandem mass spectrometer (HPLC-ESI-MS/MS) is the current analytical method of choice for quantitation of analytes in biological matrices. With HPLC-ESI-MS/MS having the characteristics of high selectivity, sensitivity, and throughput, this technology is being increasingly used in the clinical laboratory. An important issue to be addressed in method development, validation, and routine use of HPLC-ESI-MS/MS is matrix effects. Matrix effects are the alteration of ionization efficiency by the presence of coeluting substances. These effects are unseen in the chromatogram but have a deleterious impact on method accuracy and sensitivity. The two common ways to assess matrix effects are the post-extraction addition method and the post-column infusion method. To remove or minimize matrix effects, modification of the sample extraction methodology and improved chromatographic separation must be performed. These two parameters are linked together and form the basis of developing a successful and robust quantitative HPLC-ESI-MS/MS method. Due to the heterogeneous nature of the population being studied, the variability of a method must be assessed in samples taken from a variety of subjects. In this paper, the major aspects of matrix effects are discussed and an approach to addressing matrix effects during method validation is proposed. (c) 2004 The Canadian Society of Clinical Chemists. All rights reserved.
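In the post-extraction addition method mentioned above, the matrix effect is usually expressed by comparing the analyte response spiked into blank-matrix extract with the same amount in neat solvent. The helper and peak areas below are illustrative, not from the paper:

```python
def matrix_effect_percent(post_extraction_area, neat_area):
    """Post-extraction addition assessment: ratio of the peak area of
    analyte spiked into blank-matrix extract (B) to the same amount in
    neat solvent (A), as a percentage. 100% = no matrix effect,
    <100% = ion suppression, >100% = ion enhancement."""
    return 100.0 * post_extraction_area / neat_area

# Peak area 7.2e5 in spiked extract vs 9.0e5 in neat solvent:
print(matrix_effect_percent(7.2e5, 9.0e5))  # → 80.0 (20% suppression)
```

During validation, this ratio would be computed per subject lot to capture the between-subject variability the paper emphasizes.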
Abstract:
Quantitative genetics provides a powerful framework for studying phenotypic evolution and the evolution of adaptive genetic variation. Central to the approach is G, the matrix of additive genetic variances and covariances. G summarizes the genetic basis of the traits and can be used to predict the phenotypic response to multivariate selection or to drift. Recent analytical and computational advances have improved both the power and the accessibility of the necessary multivariate statistics. It is now possible to study the relationships between G and other evolutionary parameters, such as those describing the mutational input, the shape and orientation of the adaptive landscape, and the phenotypic divergence among populations. At the same time, we are moving towards a greater understanding of how the genetic variation summarized by G evolves. Computer simulations of the evolution of G, innovations in matrix comparison methods, and rapid development of powerful molecular genetic tools have all opened the way for dissecting the interaction between allelic variation and evolutionary process. Here I discuss some current uses of G and problems with the application of these approaches, and identify avenues for future research.
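The prediction of the response to multivariate selection from G is the multivariate breeder's equation, Δz = Gβ, where β is the vector of selection gradients. A toy two-trait example with hypothetical values:

```python
import numpy as np

G = np.array([[1.0, 0.5],
              [0.5, 2.0]])     # hypothetical additive genetic (co)variances
beta = np.array([0.2, -0.1])   # hypothetical selection gradients

delta_z = G @ beta             # predicted per-generation response, Δz = Gβ

# The genetic covariance couples the traits: the response of trait 1
# (0.15) is dragged below G[0,0]*beta[0] (0.20) because trait 2 is
# under negative selection.
print(delta_z)
```

This is exactly why the shape and orientation of G relative to the adaptive landscape, discussed above, constrain the direction of phenotypic evolution.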
Abstract:
Numerical modelling is a valuable tool for simulating the fundamental processes that take place during a heating. The models presented in this paper have enabled a quantitative assessment of the effects of initial pile temperature, pile size and mass, and coal particle size on the development of a heating. Each of these parameters plays a critical role in the coal self-heating process.