923 results for Measuring methods
Abstract:
In vitro measurements of skin absorption are an increasingly important aspect of regulatory studies, product support claims, and formulation screening. However, such measurements are significantly affected by skin variability. The purpose of this study was to determine inter- and intralaboratory variation in diffusion cell measurements caused by factors other than skin. This was attained through the use of an artificial (silicone rubber) rate-limiting membrane and the provision of materials including a standard penetrant, methyl paraben (MP), and a minimally prescriptive protocol to each of the 18 participating laboratories. Standardized calculations of MP flux were determined from the data submitted by each laboratory by applying a predefined mathematical model. This was deemed necessary to eliminate any interlaboratory variation caused by different methods of flux calculations. Average fluxes of MP calculated and reported by each laboratory (60 ± 27 µg cm⁻² h⁻¹, n = 25, range 27-101) were in agreement with the standardized calculations of MP flux (60 ± 21 µg cm⁻² h⁻¹, range 19-120). The coefficient of variation between laboratories was approximately 35% and was manifest as a fourfold difference between the lowest and highest average flux values and a sixfold difference between the lowest and highest individual flux values. Intra-laboratory variation was lower, averaging 10% for five individuals using the same equipment within a single laboratory. Further studies should be performed to clarify the exact components responsible for nonskin-related variability in diffusion cell measurements. It is clear that further developments of in vitro methodologies for measuring skin absorption are required. © 2005 Wiley-Liss, Inc.
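As an illustration of the headline statistic, here is a minimal sketch of how an inter-laboratory coefficient of variation could be computed from per-laboratory mean fluxes; the flux values are hypothetical, not the study's data.

```python
# Hedged sketch: coefficient of variation across laboratories (hypothetical values).
import numpy as np

lab_mean_fluxes = np.array([27.0, 45.0, 58.0, 62.0, 75.0, 101.0])  # µg cm^-2 h^-1 (hypothetical)

grand_mean = lab_mean_fluxes.mean()
inter_lab_sd = lab_mean_fluxes.std(ddof=1)       # sample standard deviation across labs
cv_percent = 100.0 * inter_lab_sd / grand_mean   # coefficient of variation (%)

print(f"mean flux = {grand_mean:.1f}, inter-laboratory CV = {cv_percent:.0f}%")
```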
Abstract:
Fouling is the deposition of milk solids on heat transfer surfaces, particularly heat exchangers. It is a major industrial problem, which causes a decrease in heat transfer efficiency and shortens run times. The resultant effect is a decrease in process efficiency and economy. For studying and monitoring deposit formation, suitable fouling detectors or methods of measuring the deposit are required. This can be achieved through direct means, whereby the deposit is analyzed after a certain time, or indirectly through instrumentation for monitoring parameters such as temperature, pressure, flow rate, overall heat transfer coefficient, heat flux, and other physical properties. This article reviews the various reported fouling detection methods.
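By way of illustration, a minimal sketch of one indirect monitoring approach the review mentions: tracking the overall heat transfer coefficient over a run and deriving an apparent fouling resistance. The numbers below are illustrative only.

```python
# Hedged sketch: apparent fouling resistance from the overall heat transfer coefficient.
# Common derived quantity: Rf = 1/U_fouled - 1/U_clean. Values are hypothetical.
U_clean = 2500.0    # W m^-2 K^-1, measured at the start of the run (hypothetical)
U_fouled = 1800.0   # W m^-2 K^-1, measured after several hours of operation (hypothetical)

R_f = 1.0 / U_fouled - 1.0 / U_clean   # m^2 K W^-1, apparent fouling resistance
print(f"apparent fouling resistance = {R_f:.2e} m^2 K / W")
```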
Abstract:
Objectives: The study was designed to show the validity and reliability of scoring the Physical Mobility Scale (PMS). The PMS was developed by physiotherapists working in residential aged care specifically to show resident functional mobility and to provide information regarding each resident's need for supervision or assistance from one or two staff members and equipment during position changes, transfers, mobilising and personal care. Methods: Nineteen physiotherapists of varying backgrounds and experience scored the performances of nine residents of care facilities from video recordings. The performances were compared to scores on two 'gold standard' assessment tools. Four of the physiotherapists repeated the evaluations. Results: The PMS showed excellent content validity and reliability. Conclusions: The PMS provides graded performance of physical mobility, including level of dependency on staff and equipment. This is a major advantage over existing functional assessment tools. There is no need for specific training for physiotherapists to use the tool.
Abstract:
Background: Current methods to find significantly under- and over-represented gene ontology (GO) terms in a set of genes consider the genes as equally probable balls in a bag, as may be appropriate for transcripts in micro-array data. However, due to the varying length of genes and intergenic regions, that approach is inappropriate for deciding if any GO terms are correlated with a set of genomic positions. Results: We present an algorithm - GONOME - that can determine which GO terms are significantly associated with a set of genomic positions, given a genome annotated with (at least) the starts and ends of genes. We show that certain GO terms may appear to be significantly associated with a set of randomly chosen positions in the human genome if gene lengths are not considered, and that these same terms have been reported as significantly over-represented in a number of recent papers. This apparent over-representation disappears when gene lengths are considered, as GONOME does. For example, we show that, when gene length is taken into account, the term 'development' is not significantly enriched in genes associated with human CpG islands, in contradiction to a previous report. We further demonstrate the efficacy of GONOME by showing that occurrences of the proteasome-associated control element (PACE) upstream activating sequence in the S. cerevisiae genome associate significantly with appropriate GO terms. An extension of this approach yields a whole-genome motif discovery algorithm that allows identification of many other promoter sequences linked to different types of genes, including a large group of previously unknown motifs significantly associated with the terms 'translation' and 'translational elongation'. Conclusion: GONOME is an algorithm that correctly extracts over-represented GO terms from a set of genomic positions. By explicitly considering gene size, GONOME avoids a systematic bias toward GO terms linked to large genes. Inappropriate use of existing algorithms that do not take gene size into account has led to erroneous or suspect conclusions. Reciprocally, GONOME may be used to identify new features in genomes that are significantly associated with particular categories of genes.
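The length-aware idea can be sketched roughly as follows; this is a reconstruction for illustration, not GONOME's actual implementation, and the gene annotation and positions are hypothetical.

```python
# Hedged sketch: a length-aware Monte Carlo test. Random positions land in long
# genes more often, so the null distribution is built by dropping random positions
# on the genome rather than by sampling genes with equal probability.
import random

# Hypothetical annotation: (start, end, has_GO_term) for each gene on one chromosome.
genes = [(0, 200_000, True), (250_000, 260_000, False),
         (300_000, 305_000, False), (400_000, 900_000, True)]
genome_length = 1_000_000

def hits_with_term(positions):
    """Count positions falling inside a gene annotated with the GO term."""
    return sum(any(s <= p < e and t for (s, e, t) in genes) for p in positions)

observed_positions = [120_000, 450_000, 870_000]   # hypothetical input positions
observed = hits_with_term(observed_positions)

# Monte Carlo null: random genomic positions, so long genes are hit proportionally more.
n_trials = 10_000
null_counts = [hits_with_term(random.sample(range(genome_length), len(observed_positions)))
               for _ in range(n_trials)]
p_value = sum(c >= observed for c in null_counts) / n_trials
print(f"observed hits = {observed}, empirical p = {p_value:.3f}")
```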
Abstract:
This paper presents a new method to measure the sinking rates of individual phytoplankton “particles” (cells, chains, colonies, and aggregates) in the laboratory. Conventional particle tracking and high resolution video imaging were used to measure particle sinking rates and particle size. The stabilizing force of a very mild linear salinity gradient (1 ppt over 15 cm) prevented the formation of convection currents in the laboratory settling chamber. Whereas bulk settling methods such as SETCOL provide a single value of sinking rate for a population, this method allows the measurement of sinking rate and particle size for a large number of individual particles or phytoplankton within a population. The method has applications where sinking rates vary within a population, or where sinking rate-size relationships are important. Preliminary data from experiments with both laboratory and field samples of marine phytoplankton are presented here to illustrate the use of the technique, its applications, and limitations. Whereas this paper deals only with sinking phytoplankton, the method is equally valid for positively buoyant species, as well as nonbiological particles.
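A minimal sketch of the final estimation step, assuming particle positions over time have already been extracted from the video: the sinking rate is taken as the slope of depth against time. The track below is hypothetical; the paper's imaging and tracking pipeline is not reproduced here.

```python
# Hedged sketch: sinking rate of one tracked particle from a hypothetical depth-time track.
import numpy as np

t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])       # s
depth = np.array([0.0, 0.11, 0.23, 0.34, 0.46])  # cm, increasing downwards (hypothetical)

slope, intercept = np.polyfit(t, depth, 1)       # slope in cm s^-1
sinking_rate_m_per_day = slope / 100.0 * 86_400  # convert cm/s to m/day
print(f"sinking rate = {sinking_rate_m_per_day:.1f} m/day")
```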
Abstract:
If a quantity has been measured by two different methods, the degree of agreement between them should not be tested using Pearson's correlation coefficient 'r'. Instead, the differences between the paired measurements should be plotted against their means and summarised relative to the mean difference, using a Bland and Altman plot. Such a plot illustrates the level of agreement between two methods and enables the degree of bias of one method over the other to be calculated and applied, if necessary, as a correction factor.
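A minimal sketch of such a comparison, with hypothetical paired readings: the differences are plotted against the pairwise means, with the mean difference (bias) and 95% limits of agreement marked.

```python
# Hedged sketch: Bland-Altman comparison of two methods (hypothetical paired data).
import numpy as np
import matplotlib.pyplot as plt

method_a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.8])
method_b = np.array([10.6, 11.2, 10.1, 12.6, 11.3, 11.5])

diff = method_a - method_b
mean_pair = (method_a + method_b) / 2.0
bias = diff.mean()              # mean difference (systematic bias)
loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

plt.scatter(mean_pair, diff)
plt.axhline(bias, linestyle="-")
plt.axhline(bias + loa, linestyle="--")
plt.axhline(bias - loa, linestyle="--")
plt.xlabel("Mean of the two methods")
plt.ylabel("Difference (A - B)")
plt.title("Bland-Altman plot")
plt.show()
```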
Abstract:
The development of abnormal protein aggregates in the form of extracellular plaques and intracellular inclusions is a characteristic feature of many neurodegenerative diseases such as Alzheimer's disease (AD), Creutzfeldt-Jakob disease (CJD) and the fronto-temporal dementias (FTD). An important aspect of a pathological protein aggregate is its spatial topography in the tissue. Lesions may not be randomly distributed within a histological section but exhibit spatial pattern, a departure from randomness either towards regularity or clustering. Information on the spatial pattern of a lesion may be useful in elucidating its pathogenesis and in studying the relationships between different lesions. This article reviews the methods that have been used to study the spatial topography of lesions. These include simple tests of whether the distribution of a lesion departs significantly from random using randomized points or sample fields, and more complex methods that employ grids or transects of contiguous fields and which can detect the intensity of aggregation and the sizes, distribution and spacing of the clusters. The usefulness of these methods in elucidating the pathogenesis of protein aggregates in neurodegenerative disease is discussed.
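As a concrete illustration of the simplest class of tests described, here is a sketch of a variance-to-mean (index of dispersion) test on lesion counts from contiguous sample fields; the counts are hypothetical, and the review's more complex grid and transect methods are not reproduced.

```python
# Hedged sketch: variance-to-mean ratio of lesion counts in contiguous fields.
# Under a random (Poisson) distribution the ratio is about 1; >1 suggests clustering,
# <1 suggests regularity. Counts are hypothetical.
import numpy as np
from scipy import stats

counts = np.array([0, 3, 7, 1, 0, 5, 9, 2, 0, 4])   # lesions per contiguous field

vmr = counts.var(ddof=1) / counts.mean()             # variance-to-mean ratio
chi2 = vmr * (len(counts) - 1)                       # index-of-dispersion statistic
p = stats.chi2.sf(chi2, df=len(counts) - 1)          # one-sided test for clustering
print(f"V/M = {vmr:.2f}, p (clustering) = {p:.3f}")
```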
Abstract:
Correlation and regression are two of the statistical procedures most widely used by optometrists. However, these tests are often misused or interpreted incorrectly, leading to erroneous conclusions from clinical experiments. This review examines the major statistical tests concerned with correlation and regression that are most likely to arise in clinical investigations in optometry. First, the use, interpretation and limitations of Pearson's product moment correlation coefficient are described. Second, the least squares method of fitting a linear regression to data and of testing how well a regression line fits the data are described. Third, the problems of using linear regression methods in observational studies, when there are errors associated with measuring the independent variable and when predicting a new value of Y for a given X, are discussed. Finally, methods for testing whether a non-linear relationship provides a better fit to the data and for comparing two or more regression lines are considered.
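A minimal sketch of the two basic procedures discussed, Pearson's product moment correlation and a least squares regression line, applied to hypothetical data.

```python
# Hedged sketch: Pearson's r and an ordinary least squares fit (hypothetical data).
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])

r, p_r = stats.pearsonr(x, y)     # strength of the linear association
fit = stats.linregress(x, y)      # least squares slope and intercept
print(f"r = {r:.3f} (p = {p_r:.3g})")
print(f"y = {fit.slope:.2f} x + {fit.intercept:.2f}, R^2 = {fit.rvalue**2:.3f}")
```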
Abstract:
Purpose: To develop a model for the global performance measurement of intensive care units (ICUs) and to apply that model to compare the services for quality improvement. Materials and Methods: The analytic hierarchy process, a multiple-attribute decision-making technique, is used in this study to develop such a model. The steps consisted of identifying the critical success factors for the best performance of an ICU, identifying subfactors that influence the critical factors, comparing them pairwise, deriving their relative importance and ratings, and calculating the cumulative performance according to the attributes of a given ICU. Every step in the model was derived by group discussions, brainstorming, and consensus among intensivists. Results: The model was applied to 3 ICUs, 1 each in Barbados, Trinidad, and India, in tertiary care teaching hospitals of similar setting. The cumulative performance rating of the Barbados ICU was 1.17, compared with 0.82 and 0.75 for the Trinidad and Indian ICUs, respectively, showing that the Trinidad and Indian ICUs performed at 70% and 64%, respectively, of the Barbados ICU. The model also enabled identification of specific areas where the ICUs did not perform well, which helped to improve those areas. Conclusions: The analytic hierarchy process is a very useful model for measuring the global performance of an ICU. © 2005 Elsevier Inc. All rights reserved.
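A minimal sketch (an illustration, not the paper's model) of the central AHP step: deriving priority weights for three hypothetical criteria from a pairwise comparison matrix via its principal eigenvector, with a consistency check.

```python
# Hedged sketch: AHP priority weights from a pairwise comparison matrix (hypothetical).
import numpy as np

# A[i, j] = how much more important criterion i is judged than criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # Saaty's random index for n = 3 is ~0.58
print(f"weights = {np.round(weights, 3)}, consistency ratio = {cr:.3f}")
```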
Abstract:
Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs would require huge computer resources in terms of memory and CPU time. This paper proposes a neural network back-propagation Data Envelopment Analysis to address this problem for the very large scale datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far lower than those of conventional DEA methods, and it can therefore be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and the results are compared with those obtained by conventional DEA.
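A rough sketch of the idea, assuming DEA efficiency scores for a training subsample have already been obtained from a conventional solver: a back-propagation network is trained on those scores and then used to estimate efficiencies for the full dataset. All arrays here are placeholders.

```python
# Hedged sketch: approximating DEA efficiency scores with a back-propagation network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))     # inputs/outputs of a training subsample of DMUs (placeholder)
dea_scores = rng.random(200)       # placeholder: efficiencies from a conventional DEA run

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, dea_scores)       # back-propagation training

X_all = rng.random((100_000, 4))   # the full, large dataset of DMUs (placeholder)
approx_scores = net.predict(X_all) # fast approximate efficiency scores
print(approx_scores[:5])
```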
Abstract:
Histological features visible in thin sections of brain tissue, such as neuronal perikarya, blood vessels, or pathological lesions, may exhibit a degree of spatial association or correlation. In neurodegenerative disorders such as Alzheimer's disease (AD), Pick's disease, and Creutzfeldt-Jakob disease (CJD), information on whether different types of pathological lesion are spatially correlated may be useful in elucidating disease pathogenesis. In the present article the statistical methods available for studying spatial association in histological sections are reviewed. These include tests of interspecific association between two or more histological features using χ2 contingency tables, measurement of 'complete' and 'absolute' association, and more complex methods that use grids of contiguous samples. In addition, the use of correlation matrices and stepwise multiple regression methods is described. The advantages and limitations of each method are reviewed and possible future developments discussed.
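A minimal sketch of the first method listed, a χ2 contingency test of interspecific association between two lesion types across sample fields; the 2×2 counts are hypothetical.

```python
# Hedged sketch: chi-squared test of co-occurrence of two lesion types in sample fields.
import numpy as np
from scipy.stats import chi2_contingency

#                 lesion B present   lesion B absent
table = np.array([[30,               10],    # lesion A present
                  [15,               45]])   # lesion A absent (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```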
Abstract:
PURPOSE: To design and validate a vision-specific quality-of-life assessment tool to be used in a clinical setting to evaluate low-vision rehabilitation strategy and management. METHODS: Previous vision-related questionnaires were assessed by low-vision rehabilitation professionals and patients for relevance and coverage. The 74 items selected were pretested to ensure correct interpretation. One hundred and fifty patients with low vision completed the chosen questions on four occasions to allow the selection of the most appropriate items. The vision-specific quality of life of patients with low vision was compared with that of 70 age-matched and gender-matched patients with normal vision and before and after low-vision rehabilitation in 278 patients. RESULTS: Items that were unreliable, internally inconsistent, redundant, or not relevant were excluded, resulting in the 25-item Low Vision Quality-of-Life Questionnaire (LVQOL). Completion of the LVQOL results in a summed score between 0 (a low quality of life) and 125 (a high quality of life). The LVQOL has a high internal consistency (α = 0.88) and good reliability (0.72). The average LVQOL score for a population with low vision (60.9 ± 25.1) was significantly lower than the average score of those with normal vision (100.3 ± 20.8). Rehabilitation improved the LVQOL score of those with low vision by an average of 6.8 ± 15.6 (17%). CONCLUSIONS: The LVQOL was shown to be an internally consistent, reliable, and fast method for measuring the vision-specific quality of life of the visually impaired in a clinical setting. It is able to quantify the quality of life of those with low vision and is useful in determining the effects of low-vision rehabilitation. Copyright (C) 2000 Elsevier Science Inc.
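For illustration, a minimal sketch of how an internal-consistency coefficient such as the reported Cronbach's alpha can be computed from item scores; the response matrix is hypothetical, not LVQOL data.

```python
# Hedged sketch: Cronbach's alpha from a small hypothetical item-response matrix.
import numpy as np

# rows = respondents, columns = questionnaire items (hypothetical ratings)
scores = np.array([[4, 5, 3, 4],
                   [2, 3, 2, 3],
                   [5, 5, 4, 5],
                   [1, 2, 1, 2],
                   [3, 4, 3, 3]])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1).sum()     # sum of item variances
total_var = scores.sum(axis=1).var(ddof=1)       # variance of the summed score
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```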
Abstract:
Objectives: The Competitive Aggressiveness and Anger Scale (CAAS) was developed to measure antecedents of aggression in sport. The critique attacks the CAAS on three points: (1) the definition of aggression in sport adopted, (2) the 'one size fits all' element in the thinking behind the scale's development, and (3) the nature of the CAAS Anger and Aggressiveness items. The objective of this response is to address misunderstandings in the critique. Methods: We identified a number of false assumptions that undermine the validity of the critique and attempt to clarify our position with respect to the criticisms made. Results: (1) The CAAS is being criticised for a definition that it did not use. (2) In our limitations section we accepted that the CAAS may not be suitable for everyone, and we fully accept the limitations of any scale. We have since undertaken a large research project to establish whether the scale is valid across and within specific sports. (3) The fundamental misunderstanding inherent throughout the critique is that the CAAS was designed as a measure of aggression, rather than anger and aggressiveness, rendering the critique of its items redundant. Conclusions: The critique misrepresents the authors of the CAAS and fails to present a coherent argument against its use. We hope to clarify our position here. The evidence to date suggests that the CAAS is a valid measure of anger and aggressiveness in many sports and that these concepts reliably differentiate players who admit unsanctioned aggression from those who do not.
Abstract:
This paper develops two new indices for measuring productivity in multi-input multi-output situations. One index enables the measurement of productivity change of a unit over time, while the second index makes it possible to compare two units on productivity at the same or different points in time. Productivity in a single-input single-output context is defined as the ratio of output to input. In multi-input multi-output contexts this ratio is not defined. Instead, one of the methods traditionally used is the Malmquist Index of productivity change over time. This is computed by reference to the distances of the input-output bundles of a production unit at two different points in time from the efficient boundaries corresponding to those two points in time. The indices developed in this paper depart from the use of two different reference boundaries and instead use a single reference boundary, which in a sense is the most efficient boundary observed over two or more successive time periods. We discuss the assumptions which make possible the definition of such a single reference boundary and proceed to develop the two Malmquist-type indices for measuring productivity. One key advantage of using a single reference boundary is that the resulting index values are circular. That is, it is possible to use the index values of successive time periods to derive an index value of productivity change over a time period of any length covered by successive index values, or vice versa. Further, the use of a single reference boundary makes it possible to construct an index for comparing the productivities of two units either at the same or at two different points in time. This was not possible with the traditional Malmquist Index. We decompose both new indices into components which isolate production unit effects from industry or comparator unit effects. The components themselves, like the indices developed, are also circular. The components of the indices drill down to reveal more clearly the performance of each unit over time relative either to itself or to other units. The indices developed and their components are aimed at managers of production units to enable them to diagnose the performance of their units with a view to guiding them to improved performance.
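A tiny numerical sketch of the circularity property, assuming the index is formed as a ratio of distances from the single common boundary; this is an assumption for illustration, and the paper's exact formulation is not reproduced here.

```python
# Hedged sketch: circularity of an index built on one common reference boundary.
# d1..d3 are hypothetical distances of a unit from that boundary in periods 1-3.
d1, d2, d3 = 0.80, 0.88, 0.99

idx_12 = d2 / d1        # index for period 1 -> 2
idx_23 = d3 / d2        # index for period 2 -> 3
idx_13 = d3 / d1        # index for period 1 -> 3
print(idx_12 * idx_23, idx_13)   # identical: chained indices reproduce the long-period index
```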
Abstract:
Data Envelopment Analysis (DEA) is a nonparametric method for measuring the efficiency of a set of decision making units such as firms or public sector agencies, first introduced into the operational research and management science literature by Charnes, Cooper, and Rhodes (CCR) [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The original DEA models were applicable only to technologies characterized by positive inputs/outputs. In subsequent literature there have been various approaches to enable DEA to deal with negative data. In this paper, we propose a semi-oriented radial measure, which permits the presence of variables which can take both negative and positive values. The model is applied to data on a notional effluent processing system to compare the results with those yielded by two alternative methods for dealing with negative data in DEA: the modified slacks-based model suggested by Sharp et al. [Sharp, J.A., Liu, W.B., Meng, W., 2006. A modified slacks-based measure model for data envelopment analysis with 'natural' negative outputs and inputs. Journal of the Operational Research Society 57 (11), 1-6] and the range directional model developed by Portela et al. [Portela, M.C.A.S., Thanassoulis, E., Simpson, G., 2004. A directional distance approach to deal with negative data in DEA: An application to bank branches. Journal of the Operational Research Society 55 (10), 1111-1121]. A further example explores the advantages of using the new model.
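For context, a minimal sketch of the classic input-oriented CCR envelopment model referred to above, solved as a linear program; this is the standard positive-data model, not the semi-oriented radial measure proposed in the paper, and the input/output data are hypothetical.

```python
# Hedged sketch: input-oriented CCR (envelopment form) efficiency via a generic LP solver.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],      # inputs:  rows = inputs, cols = DMUs (hypothetical)
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[1.0, 2.0, 3.0, 3.0]])     # outputs: rows = outputs, cols = DMUs (hypothetical)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    """Efficiency of DMU o: minimise theta over (theta, lambda_1..lambda_n)."""
    c = np.r_[1.0, np.zeros(n)]                   # objective: minimise theta
    A_in = np.hstack([-X[:, [o]], X])             # sum_j lam_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

print([round(ccr_efficiency(o), 3) for o in range(n)])
```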