966 results for quantitative method
Abstract:
Research councils, agencies, and researchers recognize the benefits of team-based health research. However, researchers involved in large-scale team-based research projects face multiple challenges as they seek to identify epistemological and ontological common ground. Typically, these challenges arise between quantitative and qualitative researchers, but they can also occur among qualitative researchers, particularly when the project involves multiple disciplinary perspectives. The authors use the convergent interviewing technique in their multidisciplinary research project to overcome these challenges. This technique assists them in developing common epistemological and ontological ground while enabling swift and detailed data collection and analysis. Although convergent interviewing is a relatively new method described primarily in marketing research, it compares and contrasts well with grounded theory and other techniques. The authors argue that this process provides a rigorous method to structure and refine research projects and requires researchers to identify, and be accountable for developing, a common epistemological and ontological position.
Abstract:
We have developed a sensitive, non-radioactive method to assess the interaction of transcription factors/DNA-binding proteins with DNA. We have modified the traditional radiolabeled DNA gel mobility shift assay to incorporate a DNA probe end-labeled with a Texas-red fluorophore and a DNA-binding protein tagged with the green fluorescent protein to precisely monitor DNA-protein complexation by native gel electrophoresis. We have applied this method to the DNA-binding proteins telomere release factor-1 and the sex-determining region-Y, demonstrating that the method is sensitive (able to detect 100 fmol of fluorescently labeled DNA), permits direct visualization of both the DNA probe and the DNA-binding protein, and enables quantitative analysis of DNA-protein complexation and thereby an estimate of the stoichiometry of protein-DNA binding.
Abstract:
Univariate linkage analysis is used routinely to localise genes for human complex traits. Often, many traits are analysed, but the significance of linkage for each trait is not corrected for multiple trait testing, which increases the experiment-wise type-I error rate. In addition, univariate analyses do not realise the full power provided by multivariate data sets. Multivariate linkage analysis is the ideal solution, but it is computationally intensive, so genome-wide analysis and evaluation of empirical significance are often prohibitive. We describe two simple methods that efficiently address these limitations by combining P-values from multiple univariate linkage analyses. The first method estimates empirical pointwise and genome-wide significance between one trait and one marker when multiple traits have been tested. It is as robust as an appropriate Bonferroni adjustment, with the advantage that no assumptions are required about the number of independent tests performed. The second method estimates the significance of linkage between multiple traits and one marker and, therefore, can be used to localise regions that harbour pleiotropic quantitative trait loci (QTL). We show that this method has greater power than individual univariate analyses to detect a pleiotropic QTL across different situations. In addition, when traits are moderately correlated and the QTL influences all traits, it can outperform formal multivariate variance components (VC) analysis. The approach is computationally feasible for any number of traits and is not affected by the residual correlation between traits. We illustrate the utility of our approach with a genome scan of three asthma traits measured in families with a twin proband.
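The abstract does not give the exact combination rule, so the sketch below illustrates the general idea only: per-trait linkage P-values at a single marker are combined (here with Fisher's method), and empirical significance is obtained by comparing the observed combined P-value with those from permuted (null) data sets. All names and values are hypothetical.

import numpy as np
from scipy import stats

def combine_pvalues_fisher(pvalues):
    """Combine per-trait linkage P-values at one marker with Fisher's method."""
    statistic = -2.0 * np.sum(np.log(pvalues))
    return stats.chi2.sf(statistic, df=2 * len(pvalues))

def empirical_significance(observed_pvalues, null_pvalue_sets):
    """Estimate empirical significance by comparing the observed combined
    P-value with combined P-values from permuted (null) data sets."""
    observed = combine_pvalues_fisher(observed_pvalues)
    null = np.array([combine_pvalues_fisher(p) for p in null_pvalue_sets])
    return (np.sum(null <= observed) + 1) / (len(null) + 1)

# Example: three traits (hypothetical P-values) tested at one marker
print(combine_pvalues_fisher(np.array([0.004, 0.03, 0.20])))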
Abstract:
A sensitive quantitative reversed-phase HPLC method is described for measuring bacterial proteolysis and proteinase activity in UHT milk. The analysis is performed on a TCA filtrate of the milk. The optimum concentration of TCA was found to be 4%; at lower concentrations, non-precipitated protein blocked the HPLC system, while higher concentrations yielded lower amounts of peptides. The method showed greater sensitivity and reproducibility than a fluorescamine-based method. Quantification of the HPLC method was achieved by use of an external dipeptide standard or a standard proteinase. (c) 2006 Elsevier Ltd. All rights reserved.
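As a minimal sketch of the external-standard quantification step, assuming a simple single-point linear calibration (the paper's actual calibration details are not given in the abstract, so all values are illustrative):

def peptide_concentration(sample_peak_area, standard_peak_area,
                          standard_concentration_uM):
    """Estimate peptide concentration from HPLC peak areas by simple
    proportionality to an external dipeptide standard (hypothetical values)."""
    response_factor = standard_concentration_uM / standard_peak_area
    return sample_peak_area * response_factor

# Example: a TCA-filtrate peak of area 15400 against a 50 uM standard of area 22000
print(peptide_concentration(15400, 22000, 50.0))  # ~35 uM equivalents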
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability of occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes a greater dependency on the potential pollution sources, rather than the traditional approach in which assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods whereby random pollution events are generated according to the same distribution as historically occurring events or an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as the risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed to produce the input files and to present the results. The functionalities of the method, as well as its sensitivities to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, were evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effects on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. Also, the migration of the contaminant plume advances faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in non-linear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method and a large benefit over contemporary risk and vulnerability methods.
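A conceptual sketch of the exceedance-counting core of this kind of risk calculation is given below. It does not reproduce the MODFLOW-2000/MT3DMS workflow; the transport step is a placeholder, and all probabilities, thresholds and distributions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate_concentration(n_events):
    # Placeholder for the integrated flow/transport models: each pollution
    # event contributes a random concentration increment (illustrative only).
    return np.sum(rng.gamma(shape=2.0, scale=5.0, size=n_events))

def pollution_risk(p_daily, n_days, threshold_mg_l, n_realisations=1000):
    exceedances = 0
    for _ in range(n_realisations):
        n_events = rng.binomial(n_days, p_daily)   # randomly generated source terms
        if simulate_concentration(n_events) > threshold_mg_l:
            exceedances += 1
    return exceedances / n_realisations            # risk as an exceedance frequency

print(pollution_risk(p_daily=0.01, n_days=365, threshold_mg_l=250.0))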
Abstract:
Objective: To quantify the neuronal and glial cell pathology in the hippocampus and the parahippocampal gyrus (PHG) of 8 cases of progressive supranuclear palsy (PSP). Material: tau-immunolabeled sections of the temporal lobe of 8 diagnosed cases of PSP. Method: The densities of lesions were measured in the PHG, the CA sectors of the hippocampus, and the dentate gyrus (DG), and studied using spatial pattern analysis. Results: Neurofibrillary tangles (NFT) and abnormally enlarged neurons (EN) were most frequent in the PHG and in sector CA1 of the hippocampus, oligodendroglial inclusions (“coiled bodies”) (GI) in the PHG, subiculum, and sectors CA1 and CA2, and neuritic plaques (NP) in sectors CA2 and CA4. The DG was the least affected region. Vacuolation and GI were observed in the alveus. No tufted astrocytes (TA) were observed. Pathological changes exhibited clustering, with the clusters of lesions often regularly distributed parallel to the tissue boundary. There was a positive correlation between the degree of vacuolation in the alveus and the densities of NFT in CA1 and of GI in CA1 and CA2. Conclusion: The pathology most significantly affected the output pathways of the hippocampus, the lesions were topographically distributed, and hippocampal pathology may be one factor contributing to cognitive decline in PSP.
Abstract:
A dry matrix application for matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) was used to profile the distribution of 4-bromophenyl-1,4-diazabicyclo[3.2.2]nonane-4-carboxylate, monohydrochloride (BDNC, SSR180711) in rat brain tissue sections. Matrix application involved applying layers of finely ground dry alpha-cyano-4-hydroxycinnamic acid (CHCA) to the surface of tissue sections thaw-mounted onto MALDI targets. It was not possible to detect the drug when the matrix was applied in a standard aqueous-organic solvent solution. The drug was detected at higher concentrations in specific regions of the brain, particularly the white matter of the cerebellum. Pseudo-multiple reaction monitoring imaging was used to validate that the observed distribution corresponded to the target compound. The semiquantitative data obtained from signal intensities in the imaging were confirmed by laser microdissection of specific regions of the brain directed by the imaging, followed by hydrophilic interaction chromatography in combination with a quantitative high-resolution mass spectrometry method. This study illustrates that a dry matrix coating is a valuable and complementary matrix application method for the analysis of small polar drugs and metabolites, and one that can be used for semiquantitative analysis.
Abstract:
The software underpinning today’s IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.
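As a minimal sketch of the kind of Markov-chain quantitative analysis such a framework might perform, the example below computes the stationary distribution of a small discrete-time chain describing a managed component and uses it to score candidate parameter settings; the transition matrices and cost figures are illustrative assumptions, not the framework's actual models.

import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi with pi P = pi, via the eigenvector of P^T."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    i = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, i])
    return pi / pi.sum()

def expected_cost(P, state_costs):
    return float(stationary_distribution(P) @ state_costs)

# Two candidate disk power-management policies as transition matrices over the
# states (active, idle, standby), with hypothetical per-state power costs in watts.
policies = {
    "aggressive_spin_down": np.array([[0.90, 0.08, 0.02],
                                      [0.30, 0.40, 0.30],
                                      [0.20, 0.00, 0.80]]),
    "conservative":         np.array([[0.90, 0.09, 0.01],
                                      [0.30, 0.65, 0.05],
                                      [0.20, 0.00, 0.80]]),
}
power_w = np.array([8.0, 4.0, 1.0])
best = min(policies, key=lambda name: expected_cost(policies[name], power_w))
print(best, {n: round(expected_cost(P, power_w), 2) for n, P in policies.items()})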
Abstract:
The last decade has seen a considerable increase in the application of quantitative methods in the study of histological sections of brain tissue and especially in the study of neurodegenerative disease. These disorders are characterised by the deposition and aggregation of abnormal or misfolded proteins in the form of extracellular protein deposits such as senile plaques (SP) and intracellular inclusions such as neurofibrillary tangles (NFT). Quantification of brain lesions and studying the relationships between lesions and normal anatomical features of the brain, including neurons, glial cells, and blood vessels, has become an important method of elucidating disease pathogenesis. This review describes methods for quantifying the abundance of a histological feature such as density, frequency, and 'load' and the sampling methods by which quantitative measures can be obtained including plot/quadrat sampling, transect sampling, and the point-quarter method. In addition, methods for determining the spatial pattern of a histological feature, i.e., whether the feature is distributed at random, regularly, or is aggregated into clusters, are described. These methods include the use of the Poisson and binomial distributions, pattern analysis by regression, Fourier analysis, and methods based on mapped point patterns. Finally, the statistical methods available for studying the degree of spatial correlation between pathological lesions and neurons, glial cells, and blood vessels are described.
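As an illustration of one of the simplest quadrat-based approaches covered by such reviews, the sketch below compares lesion counts per sampling field with a Poisson expectation using the variance/mean ratio (a ratio near 1 suggesting a random distribution, below 1 regularity, and above 1 clustering); the counts are invented for illustration.

import numpy as np
from scipy import stats

def variance_mean_ratio_test(counts):
    """Variance/mean ratio test for departure from a random (Poisson) pattern."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    vmr = counts.var(ddof=1) / counts.mean()
    # Under a Poisson model, (n - 1) * VMR is approximately chi-squared with n - 1 df.
    chi2 = (n - 1) * vmr
    p = 2 * min(stats.chi2.sf(chi2, n - 1), stats.chi2.cdf(chi2, n - 1))
    return vmr, p

lesion_counts = [0, 3, 7, 1, 0, 9, 2, 0, 5, 1]   # e.g. NFT per quadrat (hypothetical)
print(variance_mean_ratio_test(lesion_counts))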
Abstract:
Objective: Qualitative research is increasingly valued as part of the evidence for policy and practice, but how it should be appraised is contested. Various appraisal methods, including checklists and other structured approaches, have been proposed but rarely evaluated. We aimed to compare three methods for appraising qualitative research papers that were candidates for inclusion in a systematic review of evidence on support for breast-feeding. Method: A sample of 12 research papers on support for breast-feeding was appraised by six qualitative reviewers using three appraisal methods: unprompted judgement, based on expert opinion; a UK Cabinet Office quality framework; and CASP, a Critical Appraisal Skills Programme tool. Papers were assigned, following appraisals, to one of five categories, which were dichotomized to indicate whether or not papers should be included in a systematic review. Patterns of agreement in the categorization of papers were assessed quantitatively using κ statistics, and qualitatively using cross-case analysis. Results: Agreement in categorizing papers across the three methods was slight (κ = 0.13; 95% CI 0.06-0.24). Structured approaches did not appear to yield higher agreement than unprompted judgement. Qualitative analysis revealed reviewers' dilemmas in deciding between the potential impact of findings and the quality of the research execution or reporting practice. Structured instruments appeared to make reviewers more explicit about the reasons for their judgements. Conclusions: Structured approaches may not produce greater consistency of judgements about whether to include qualitative papers in a systematic review. Future research should address how appraisals of qualitative research should be incorporated in systematic reviews. © The Royal Society of Medicine Press Ltd 2007.
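For readers unfamiliar with the agreement statistic reported above, the sketch below computes Cohen's kappa for the dichotomised include/exclude decisions of two appraisal methods over the same papers; the decisions are hypothetical and do not reproduce the study's own analysis or confidence intervals.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters/methods giving categorical ratings."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(ratings_a) | set(ratings_b)) / n**2
    return (observed - expected) / (1 - expected)

# Include (1) / exclude (0) decisions for 12 papers under two methods (hypothetical)
framework = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
casp      = [1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1]
print(round(cohens_kappa(framework, casp), 2))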
Abstract:
The literature available on submerged arc welding of copper and copper alloys, submerged arc welding with strip electrodes, and related areas has been reviewed in depth. Copper cladding of mild steel substrates by deposition from strip electrodes using the submerged arc welding process has been successful. A wide range of parameters and several fluxes have been investigated. The range of deposit compositions is 66.4% Cu to 95.7% Cu. The weld beads have been metallographically examined using optical and electron microscopy. Equating weld beads to a thermodynamic equivalent of iron has proven to be an accurate and simplified means of handling quantitative data for multicomponent welds. Empirical equations derived from theoretical considerations characterize the weld bead dimensions as functions of the welding parameters and hence composition. The melting rate for strip electrodes depends on the current-voltage product. Weld nugget size is increased by the higher thermal transfer efficiencies resulting from stirring, which is current-dependent. The presence of Fe2O3 in a flux has been demonstrated to diminish the electrode melting rate and drastically increase penetration, making flux choice the prime consideration in cladding operations. A theoretical model for welding with strip electrodes and the submerged arc process is presented.
Abstract:
Objective: To study the density and cross-sectional area of axons in the optic nerve in elderly control subjects and in cases of Alzheimer's disease (AD) using an image analysis system. Methods: Sections of optic nerves from control and AD patients were stained with toluidine blue to reveal axon profiles. Results: The density of axons was reduced in both the central and peripheral portions of the optic nerve in AD compared with control patients. Analysis of axons with different cross-sectional areas suggested a specific loss of the smaller-sized axons in AD, i.e., those with areas less than 1.99 μm². An analysis of axons >11 μm² in cross-sectional area suggested no specific loss of the larger axons in this group of patients. Conclusions: The data suggest that image analysis provides an accurate and reproducible method of quantifying axons in the optic nerve. In addition, the data suggest that axons are lost throughout the optic nerve, with a specific loss of the smaller-sized axons. Loss of the smaller axons may explain the deficits in color vision observed in a significant proportion of patients with AD.
Abstract:
Artifact selection decisions typically involve the selection of one from a number of possible/candidate options (decision alternatives). In order to support such decisions, it is important to identify and recognize relevant key issues of problem solving and decision making (Albers, 1996; Harris, 1998a, 1998b; Jacobs & Holten, 1995; Loch & Conger, 1996; Rumble, 1991; Sauter, 1999; Simon, 1986). Sauter classifies four problem solving/decision making styles: (1) left-brain style, (2) right-brain style, (3) accommodating, and (4) integrated (Sauter, 1999). The left-brain style employs analytical and quantitative techniques and relies on rational and logical reasoning. In an effort to achieve predictability and minimize uncertainty, problems are explicitly defined, solution methods are determined, orderly information searches are conducted, and analysis is increasingly refined. Left-brain style decision making works best when it is possible to predict/control, measure, and quantify all relevant variables, and when information is complete. In direct contrast, right-brain style decision making is based on intuitive techniques—it places more emphasis on feelings than facts. Accommodating decision makers use their non-dominant style when they realize that it will work best in a given situation. Lastly, integrated style decision makers are able to combine the left- and right-brain styles—they use analytical processes to filter information and intuition to contend with uncertainty and complexity.
Abstract:
This work presents a two-dimensional risk assessment method based on the quantification of the probability of occurrence of contaminant source terms, as well as the assessment of the resultant impacts. The risk is calculated using Monte Carlo simulation methods whereby synthetic contaminant source terms are generated according to the same distribution as historically occurring pollution events or an a priori probability distribution. The spatial and temporal distributions of the generated contaminant concentrations at pre-defined monitoring points within the aquifer are then simulated from repeated realisations using integrated mathematical models. The number of times user-defined ranges of concentration magnitudes are exceeded is quantified as the risk. The utility of the method was demonstrated using hypothetical scenarios, and the risk of pollution from a number of sources all occurring by chance together was evaluated. The results are presented in the form of charts and spatial maps. The generated risk maps show the risk of pollution at each observation borehole, as well as the trends within the study area. This capability to generate synthetic pollution events from numerous potential sources of pollution, based on the historical frequency of their occurrence, proved to be a great asset of the method and a large benefit over contemporary methods.
Abstract:
Appealingly simple: A new method is described that allows the diffusion coefficient of a small molecule to be estimated given only the molecular weight and the viscosity of the solvent used. This method makes possible the quantitative interpretation of the diffusion domain of diffusion-ordered NMR spectra. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
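As a rough sketch in the same spirit (not the authors' published equation), the example below estimates an effective hydrodynamic radius from the molecular weight, assuming a spherical molecule of an assumed effective density, and inserts it into the Stokes-Einstein relation D = kT/(6πηr).

import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro's number, 1/mol

def estimate_diffusion_coefficient(molecular_weight_g_mol,
                                   viscosity_pa_s,
                                   temperature_k=298.15,
                                   effective_density_g_cm3=1.0):
    """Crude Stokes-Einstein estimate of D (m^2/s) for a small molecule,
    assuming a sphere of the given effective density (an assumption)."""
    volume_cm3 = molecular_weight_g_mol / (effective_density_g_cm3 * N_A)
    radius_m = ((3.0 * volume_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)) * 1e-2
    return K_B * temperature_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

# Example: a 180 g/mol molecule in water (viscosity ~0.89 mPa s at 25 °C)
print(estimate_diffusion_coefficient(180.0, 0.89e-3))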