992 results for Quantitative parameters


Relevance:

30.00%

Publisher:

Abstract:

Linkage and association analyses were performed to identify loci affecting disease susceptibility by scoring previously characterized sequence variations such as microsatellites and single nucleotide polymorphisms. Lack of markers in regions of interest, as well as difficulty in adapting various methods to high-throughput settings, often limits the effectiveness of the analyses. We have adapted the Escherichia coli mismatch detection system, employing the factors MutS, MutL and MutH, for use in PCR-based, automated, high-throughput genotyping and mutation detection of genomic DNA. Optimal sensitivity and signal-to-noise ratios were obtained in a straightforward fashion because the detection reaction proved to be principally dependent upon monovalent cation concentration and MutL concentration. Quantitative relationships of the optimal values of these parameters with length of the DNA test fragment were demonstrated, in support of the translocation model for the mechanism of action of these enzymes, rather than the molecular switch model. Thus, rapid, sequence-independent optimization was possible for each new genomic target region. Other factors potentially limiting the flexibility of mismatch scanning, such as positioning of dam recognition sites within the target fragment, have also been investigated. We developed several strategies, which can be easily adapted to automation, for limiting the analysis to intersample heteroduplexes. Thus, the principal barriers to the use of this methodology, which we have designated PCR candidate region mismatch scanning, in cost-effective, high-throughput settings have been removed.

To quantitatively investigate the trafficking of the transmembrane lectin VIP36 and its relation to cargo-containing transport carriers (TCs), we analyzed a C-terminal fluorescent-protein (FP) fusion, VIP36-SP-FP. When expressed at moderate levels, VIP36-SP-FP localized to the endoplasmic reticulum, Golgi apparatus, and intermediate transport structures, and colocalized with epitope-tagged VIP36. Temperature shift and pharmacological experiments indicated VIP36-SP-FP recycled in the early secretory pathway, exhibiting trafficking representative of a class of transmembrane cargo receptors, including the closely related lectin ERGIC53. VIP36-SP-FP trafficking structures comprised tubules and globular elements, which translocated in a saltatory manner. Simultaneous visualization of anterograde secretory cargo and VIP36-SP-FP indicated that the globular structures were pre-Golgi carriers, and that VIP36-SP-FP segregated from cargo within the Golgi and was not included in post-Golgi TCs. Organelle-specific bleach experiments directly measured the exchange of VIP36-SP-FP between the Golgi and endoplasmic reticulum (ER). Fitting a two-compartment model to the recovery data predicted first order rate constants of 1.22 ± 0.44%/min for ER → Golgi, and 7.68 ± 1.94%/min for Golgi → ER transport, revealing a half-time of 113 ± 70 min for leaving the ER and 1.67 ± 0.45 min for leaving the Golgi, and accounting for the measured steady-state distribution of VIP36-SP-FP (13% Golgi/87% ER). Perturbing transport with AlF4− treatment altered VIP36-SP-GFP distribution and changed the rate constants. The parameters of the model suggest that relatively small differences in the first order rate constants, perhaps manifested in subtle differences in the tendency to enter distinct TCs, result in large differences in the steady-state localization of secretory components.
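As a rough consistency check, the reported rate constants alone predict the measured steady-state distribution. A minimal flux-balance sketch (rate constants taken from the abstract; the two-compartment assumption is the model described above):

```python
# First-order rate constants from the abstract, expressed as fractions
# per minute (1.22 %/min for ER -> Golgi, 7.68 %/min for Golgi -> ER).
k_in = 0.0122   # ER -> Golgi
k_out = 0.0768  # Golgi -> ER

# At steady state the two fluxes balance: k_in * ER = k_out * Golgi,
# so the Golgi fraction of the total pool is k_in / (k_in + k_out).
golgi_fraction = k_in / (k_in + k_out)
er_fraction = 1.0 - golgi_fraction

print(f"Golgi {golgi_fraction:.1%} / ER {er_fraction:.1%}")  # Golgi 13.7% / ER 86.3%
```

This agrees well with the measured 13% Golgi / 87% ER distribution reported in the abstract.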

BACKGROUND Researchers evaluating angiomodulating compounds as part of scientific projects or pre-clinical studies are often confronted with the limitations of the applied animal models. Rough, insufficient early-stage compound assessment without reliable quantification of the vascular response accounts, at least partially, for the low transition rate to the clinic. OBJECTIVE To establish an advanced, rapid and cost-effective angiogenesis assay for the precise and sensitive assessment of angiomodulating compounds using zebrafish caudal fin regeneration. It should provide information regarding the angiogenic mechanisms involved and should include qualitative and quantitative data of drug effects in a non-biased and time-efficient way. APPROACH & RESULTS Basic vascular parameters (total regenerated area, vascular projection area, contour length, vessel area density) were extracted from in vivo fluorescence microscopy images using a stereological approach. Skeletonization of the vasculature by our custom-made software Skelios provided additional parameters, including "graph energy" and "distance to farthest node". The latter gave important insights into the complexity, connectivity and maturation status of the regenerating vascular network. The use of a reference point (vascular parameters prior to amputation) is unique to the model and crucial for a proper assessment. Additionally, the assay provides exceptional possibilities for correlative microscopy by combining in vivo imaging and morphological investigation of the area of interest. The 3-way correlative microscopy links the dynamic changes in vivo with their structural substrate at the subcellular level. CONCLUSIONS The improved zebrafish fin regeneration model with advanced quantitative analysis and optional 3-way correlative morphology is a promising in vivo angiogenesis assay, well suited for basic research and preclinical investigations.
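Skelios is custom software, but a "distance to farthest node" metric of the kind described can be illustrated on a toy skeleton graph. The sketch below (hypothetical node labels, plain breadth-first search) computes the hop distance from a root node to the node farthest from it:

```python
from collections import deque

def distance_to_farthest_node(adj, start):
    """BFS over an unweighted skeleton graph; returns the hop distance
    from `start` to the node farthest from it (its eccentricity)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return max(dist.values())

# Toy skeletonised vessel tree (hypothetical node labels)
skeleton = {
    "root": ["a"],
    "a": ["root", "b", "c"],
    "b": ["a", "d"],
    "c": ["a"],
    "d": ["b"],
}
print(distance_to_farthest_node(skeleton, "root"))  # 3 (root -> a -> b -> d)
```

A larger distance to the farthest node on a real skeleton indicates a longer, more extended network, which is one way such a metric can reflect the maturation status of the regenerating vasculature.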

Background: False-negative interpretations of dobutamine stress echocardiography (DSE) may be associated with reduced wall stress. Using quantitative measurements of contraction, we sought to determine whether these segments were actually ischemic but unrecognized, or showed normal contraction. Methods: We studied 48 patients (29 men; mean age 60 +/- 10 years) with normal regional function on the basis of standard qualitative interpretation of DSE. At coronary angiography within 6 months of DSE, 32 were identified as having true-negative and 16 as having false-negative results of DSE. Three apical views were used to measure regional function with color Doppler tissue, integrated backscatter, and strain rate imaging. Cyclic variation of integrated backscatter was measured in 16 segments, and strain rate and peak systolic strain were calculated in 6 walls at rest and peak stress. Results: Segments with false-negative results of DSE were divided into 2 groups, with and without low wall stress, according to previously published cut-off values. Age, sex, left ventricular mass, left ventricular geometric pattern, and peak workload were not significantly different between patients with true- and false-negative results of DSE. Importantly, no significant differences in cyclic variation and strain parameters at rest and peak stress were found among segments with true- and false-negative results of DSE with and without low wall stress. Stenosis severity had no influence on cyclic variation and strain parameters at peak stress. Conclusions: False-negative results of DSE reflect a lack of ischemia rather than underinterpretation of regional left ventricular function. Quantitative markers are unlikely to increase the sensitivity of DSE.

In this paper we propose a composite depth of penetration (DOP) approach to excluding bottom reflectance when mapping water quality parameters from Landsat Thematic Mapper (TM) data in the shallow coastal zone of Moreton Bay, Queensland, Australia. Three DOPs were calculated from TM1, TM2 and TM3, in conjunction with bathymetric data, at an accuracy ranging from +/-5% to +/-23%. These depths were used to segment the image into four DOP zones. Sixteen in situ water samples were collected concurrently with the recording of the satellite image. These samples were used to establish regression models for total suspended sediment (TSS) concentration and Secchi depth within each DOP zone. The models contain identical bands and band transformations for both parameters; they are linear for TSS concentration and logarithmic for Secchi depth. Based on these models, TSS concentration and Secchi depth were mapped from the satellite image in the respective DOP zones. Their mapped patterns are consistent with the in situ observations. Spatially, overestimation and underestimation of the parameters are restricted to localised areas but related to the absolute value of the parameters. The mapping was accomplished more accurately using multiple DOP zones than using a single zone in shallower areas. The composite DOP approach enables the mapping to be extended to areas as shallow as <3 m.
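As an illustration of the zone-wise regression models (linear for TSS, logarithmic for Secchi depth), here is a minimal least-squares sketch; the band reflectances and water-quality values are hypothetical, not the Moreton Bay data:

```python
import math

def linefit(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical in situ samples for one DOP zone:
band = [0.10, 0.15, 0.20, 0.25]   # TM band reflectance (arbitrary units)
tss = [5.0, 7.5, 10.0, 12.5]      # mg/L, roughly linear in reflectance
secchi = [3.2, 2.1, 1.5, 1.1]     # m, roughly log-linear in reflectance

a, b = linefit(band, tss)                             # TSS = a + b*band
c, d = linefit(band, [math.log(s) for s in secchi])   # ln(Secchi) = c + d*band

print(round(a + b * 0.18, 2))  # predicted TSS at reflectance 0.18 -> 9.0
```

In the paper's scheme, a separate pair of models of this form is fitted per DOP zone, and each pixel is predicted using the models of the zone it falls in.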

A common problem encountered during the development of MS methods for the quantitation of small organic molecules by LC-MS is the formation of non-covalently bound species, or adducts, in the electrospray interface. Often the population of the molecular ion is insignificant compared to those of all the other forms of the analyte produced in the electrospray, making it difficult to obtain the sensitivity required for accurate quantitation. We have investigated the effects of the following variables: orifice potential, nebulizer gas flow, temperature, solvent composition and sample pH on the relative distributions of ions of the types MH+, MNa+, MNH4+, and 2MNa+, where M represents the small organic molecule BAY 11-7082 ((E)-3-[4-methylphenylsulfonyl]-2-propenenitrile). Orifice potential, solvent composition and sample pH had the greatest influence on the relative distributions of these ions, making these parameters the most useful for optimizing methods for the quantitation of small molecules.
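The relative abundances of these adduct ions are instrument- and condition-dependent, but their m/z positions follow directly from the neutral mass. A small sketch; the monoisotopic mass of BAY 11-7082 (C10H9NO2S, ≈207.035 Da) is our assumption for illustration:

```python
# Monoisotopic masses (Da) of the singly charged charge carriers.
PROTON = 1.007276
NA_CATION = 22.989218    # Na minus one electron
NH4_CATION = 18.033823   # NH4+

def adduct_mz(m):
    """m/z values of common singly charged electrospray adducts of a
    neutral molecule M with monoisotopic mass m."""
    return {
        "MH+":   m + PROTON,
        "MNa+":  m + NA_CATION,
        "MNH4+": m + NH4_CATION,
        "2MNa+": 2 * m + NA_CATION,  # sodiated dimer
    }

for ion, mz in adduct_mz(207.0354).items():
    print(f"{ion}: {mz:.4f}")
```

Watching these four m/z channels while varying orifice potential, solvent composition and pH is one way to track how the ion population shifts between the protonated molecule and its adducts.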

We report methods for correcting the photoluminescence emission and excitation spectra of highly absorbing samples for re-absorption and inner filter effects. We derive the general form of the correction, and investigate various methods for determining the parameters. Additionally, the correction methods are tested with highly absorbing fluorescein and melanin (broadband absorption) solutions; the expected linear relationships between absorption and emission are recovered upon application of the correction, indicating that the methods are valid. These procedures allow accurate quantitative analysis of the emission of low quantum yield samples (such as melanin) at concentrations where absorption is significant.
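The general form derived in the paper is not reproduced here; for orientation, the widely used approximate inner filter correction for a standard 1 cm cuvette is sketched below (this simplified formula is an assumption for illustration, not the authors' exact derivation):

```python
def inner_filter_correct(f_obs, a_ex, a_em):
    """Approximate primary + secondary inner filter correction:
    F_corr = F_obs * 10**((A_ex + A_em) / 2),
    where a_ex and a_em are the absorbances at the excitation and
    emission wavelengths (1 cm path length, centred beam assumed)."""
    return f_obs * 10 ** ((a_ex + a_em) / 2)

# Observed intensity 100 with A_ex = 0.5 and A_em = 0.1:
print(round(inner_filter_correct(100.0, 0.5, 0.1), 1))  # 199.5
```

At absorbances this high the correction roughly doubles the apparent intensity, which is why uncorrected emission from strongly absorbing samples such as melanin deviates so badly from linearity.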

High-performance liquid chromatography coupled by an electrospray ion source to a tandem mass spectrometer (HPLC-ESI-MS/MS) is the current analytical method of choice for quantitation of analytes in biological matrices. With its high selectivity, sensitivity, and throughput, this technology is being increasingly used in the clinical laboratory. An important issue to be addressed in method development, validation, and routine use of HPLC-ESI-MS/MS is matrix effects. Matrix effects are the alteration of ionization efficiency by the presence of coeluting substances. These effects are unseen in the chromatogram but have a deleterious impact on a method's accuracy and sensitivity. The two common ways to assess matrix effects are the post-extraction addition method and the post-column infusion method. To remove or minimize matrix effects, modifications to the sample extraction methodology and improved chromatographic separation must be performed. These two parameters are linked together and form the basis of developing a successful and robust quantitative HPLC-ESI-MS/MS method. Due to the heterogeneous nature of the population being studied, the variability of a method must be assessed in samples taken from a variety of subjects. In this paper, the major aspects of matrix effects are discussed and an approach to addressing matrix effects during method validation is proposed.
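For the post-extraction addition method, matrix effect, recovery and process efficiency are commonly computed from three peak areas: a neat standard, analyte spiked into blank matrix after extraction, and analyte spiked before extraction. A minimal sketch with hypothetical peak areas:

```python
def matrix_effect_stats(area_neat, area_post_extraction, area_pre_extraction):
    """Post-extraction addition assessment (percentages):
    ME = matrix effect, RE = extraction recovery, PE = process efficiency."""
    me = 100.0 * area_post_extraction / area_neat
    re = 100.0 * area_pre_extraction / area_post_extraction
    pe = 100.0 * area_pre_extraction / area_neat
    return me, re, pe

# Hypothetical peak areas: neat standard, post-extraction spike, pre-extraction spike
me, re, pe = matrix_effect_stats(1000.0, 750.0, 600.0)
print(me, re, pe)  # 75.0 80.0 60.0 -> 25% ion suppression by the matrix
```

A matrix effect below 100% indicates ion suppression, above 100% ion enhancement; assessing ME across samples from multiple subjects is how the population variability discussed above is captured.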

Quantitative genetics provides a powerful framework for studying phenotypic evolution and the evolution of adaptive genetic variation. Central to the approach is G, the matrix of additive genetic variances and covariances. G summarizes the genetic basis of the traits and can be used to predict the phenotypic response to multivariate selection or to drift. Recent analytical and computational advances have improved both the power and the accessibility of the necessary multivariate statistics. It is now possible to study the relationships between G and other evolutionary parameters, such as those describing the mutational input, the shape and orientation of the adaptive landscape, and the phenotypic divergence among populations. At the same time, we are moving towards a greater understanding of how the genetic variation summarized by G evolves. Computer simulations of the evolution of G, innovations in matrix comparison methods, and rapid development of powerful molecular genetic tools have all opened the way for dissecting the interaction between allelic variation and evolutionary process. Here I discuss some current uses of G, problems with the application of these approaches, and identify avenues for future research.
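The predictive use of G described above can be illustrated with the multivariate breeder's equation, Δz̄ = Gβ; the two-trait G matrix and selection gradient below are hypothetical:

```python
def matvec(G, beta):
    """Multivariate response to selection: delta_z = G @ beta."""
    return [sum(g * b for g, b in zip(row, beta)) for row in G]

# Hypothetical 2-trait G matrix: additive genetic variances on the
# diagonal, the additive genetic covariance off-diagonal.
G = [[1.0, 0.5],
     [0.5, 2.0]]
beta = [0.2, -0.1]  # hypothetical directional selection gradient

delta_z = matvec(G, beta)
print([round(v, 3) for v in delta_z])  # [0.15, -0.1]
```

Note that the genetic covariance deflects the response away from the direction of selection: trait 1 responds less than its variance alone would predict because selection acts against the genetically correlated trait 2.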

Numerical modelling is a valuable tool for simulating the fundamental processes that take place during a heating. The models presented in this paper have enabled a quantitative assessment of the effects of initial pile temperature, pile size and mass, and coal particle size on the development of a heating. Each of these parameters has a critical influence on the coal self-heating process.

Pulse transit time (PTT) is a non-invasive measure, defined as the time taken for the pulse pressure wave to travel from the R-wave of the electrocardiogram to a selected peripheral site. The baseline PTT value is known to be influenced by physiologic variables such as heart rate (HR), blood pressure (BP) and arterial compliance (AC). However, few quantitative data are available describing the factors that can influence PTT measurements in a child during breathing. The aim of this study was to investigate the effects of changes in breathing effort on PTT baseline and fluctuations. Two different inspiratory resistive loading (IRL) devices were used to simulate loaded breathing in order to induce these effects. It is known that HR can influence the normative PTT value; however, the effect of HR variability (HRV) is not well studied. Two groups of 3 healthy children were studied, with no significant (p > 0.05) HR changes during all test activities. Results showed that HRV is not the sole contributor to PTT variations and suggest that changes in other physiologic parameters are equally important. Hence, monitoring PTT can be indicative of these associated changes during tidal or increased breathing efforts in healthy children.
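A per-beat PTT series is simply the difference between each ECG R-wave time and the corresponding pulse arrival time at the peripheral site; a minimal sketch with hypothetical timestamps:

```python
def pulse_transit_times(r_peaks, pulse_arrivals):
    """PTT per beat: time from each ECG R-wave to the corresponding
    arrival of the pressure pulse at a peripheral site (same units)."""
    return [p - r for r, p in zip(r_peaks, pulse_arrivals)]

# Hypothetical timestamps in seconds (three beats)
r_peaks = [0.00, 0.80, 1.62]
pulses = [0.21, 1.02, 1.85]

ptt = pulse_transit_times(r_peaks, pulses)
print([round(t, 2) for t in ptt])  # [0.21, 0.22, 0.23]
```

Beat-to-beat fluctuations in such a series are what the study relates to breathing effort, over and above the baseline shifts attributable to HR, BP and arterial compliance.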

The thesis presents a two-dimensional Risk Assessment Method (RAM) where the assessment of risk to the groundwater resources incorporates both the quantification of the probability of the occurrence of contaminant source terms, as well as the assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods whereby random pollution events were generated to the same distribution as historically occurring events or an a priori potential probability distribution. Integrated mathematical models then simulate contaminant concentrations at the predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations were calculated from repeated realisations, and the number of times a user-defined concentration magnitude was exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed in producing the input files and for the presentation of the results. The functionalities of the method, as well as its sensitivities to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, were evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input into the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps.
Varying the model grid sizes indicates no significant effects on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. The migration of the contaminant plume also advances faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in non-linear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated, and quantitatively presented as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a major benefit over contemporary risk and vulnerability methods.
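The exceedance-counting step of the method can be sketched independently of MODFLOW/MT3DMS: risk is the fraction of Monte Carlo realisations in which a simulated concentration at a monitoring point exceeds the user-defined threshold. The toy model below stands in for the full transport simulation (all probabilities and distributions hypothetical):

```python
import random

def exceedance_risk(n_realisations, simulate_concentration, threshold, seed=1):
    """Monte Carlo risk: fraction of realisations in which the simulated
    concentration at a monitoring point exceeds the given threshold."""
    rng = random.Random(seed)
    hits = sum(simulate_concentration(rng) > threshold
               for _ in range(n_realisations))
    return hits / n_realisations

def toy_model(rng):
    """Stand-in for the transport run: a pollution event occurs with
    probability 0.3 and then yields a lognormal concentration."""
    return rng.lognormvariate(0.0, 1.0) if rng.random() < 0.3 else 0.0

risk = exceedance_risk(10_000, toy_model, threshold=1.0)
print(round(risk, 2))  # roughly 0.15 (≈ 0.3 × P[lognormal(0,1) > 1])
```

In the thesis the same counting is done per cell and per monitoring point, which is what produces the spatial risk maps.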

The software underpinning today’s IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.
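For components modelled as Markov chains, the quantitative analysis typically involves the chain's long-run (stationary) distribution. A minimal sketch; the 2-state disk model and its transition probabilities below are hypothetical, not the case-study values:

```python
def stationary_distribution(P, iters=200):
    """Stationary distribution of a discrete-time Markov chain with
    row-stochastic transition matrix P, by repeated multiplication."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 2-state disk-drive model: state 0 = active, 1 = spun down
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = stationary_distribution(P)
print([round(p, 3) for p in pi])  # [0.833, 0.167]
```

An autonomic controller of the kind described would recompute such quantities as the workload changes and adjust parameters (here, the spin-down probability) to keep the long-run behaviour aligned with the system's objectives.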

This investigation is in two parts, theory and experimental verification.

(1) Theoretical Study

In this study it is necessary to analyse the concept of formability first. For the purpose of the present investigation it is sufficient to define the four aspects of formability as follows: (a) the formability of the material at a critical section, (b) the formability of the material in general, (c) process efficiency, (d) proportional increase in surface area. A method of quantitative assessment is proposed for each of the four aspects of formability. The theoretical study also includes the distinction between coaxial and non-coaxial strains, which occur, respectively, in axisymmetrical and unsymmetrical forming processes; the inadequacy of the circular grid system for the assessment of formability is explained in the light of this distinction.

(2) Experimental Study

As one of the bases of the experimental work, the determination of the end point of a forming process, which sets the limit to the formability of the work material, is discussed. The effects of three process parameters on draw-in are shown graphically. The delay of fracture in sheet metal forming resulting from draw-in is then analysed in kinematical terms, namely through the radial displacements, the radial and circumferential strains, and the projected thickness of the workpiece. Through the equilibrium equation of the membrane stresses, the effect on the shape of the unsupported region of the workpiece, and hence the position of the critical section, is explained. The effect of draw-in on the four aspects of formability is discussed throughout this investigation. The triangular coordinate system is used to present and analyse the triaxial strains involved. This coordinate system has the advantage of showing all three principal strains in a material simultaneously, as well as clearly representing the many types of strain involved in sheet metal work.

In this paper we present a back-to-back comparison of quantitative phase and refractive index from a microscopic image of a waveguide previously obtained by Allsop et al. The paper also shows a microscopic image of the first 3 waveguides from the sample. Tomlins et al. have demonstrated the use of femtosecond-fabricated artefacts as OCT calibration samples. Here we present the use of femtosecond waveguides, inscribed with optimized parameters, to test and calibrate the sensitivity of OCT systems.