948 results for Data reliability
Abstract:
Purpose To assess the psychometric properties of the Simplified Therapeutic Intervention Scoring System (TISS 28) scale. Materials and Methods A prospective observational design was used. Patients were recruited from a medical-surgical intensive care unit (ICU) and 4 rehabilitation wards of 2 university-affiliated hospitals in Hong Kong. Data necessary for the calculation of the TISS 28, the Therapeutic Intervention Scoring System (TISS 76), and a severity of illness scoring system (Simplified Acute Physiology Score [SAPS II]) were recorded for each patient during the first 24 hours after his/her admission to the ICU. Results A significant positive correlation was found between the TISS 76 and the TISS 28 scores, as well as between the TISS 28 and the SAPS II scores. There was a significant difference between the TISS 28 scores of ICU patients and patients in rehabilitation wards. A significant correlation was found between the first and second sets of TISS 28 scores. Conclusions Although the findings supported the validity and reliability of the TISS 28, there were limitations of the TISS 28 in measuring nursing workload in ICUs. Hence, continued amendment and validation of the TISS 28 on larger samples in different ICUs would be required so as to provide clinical nurses with a valid and reliable assessment of nursing workload.
Abstract:
Background and Purpose. Arm lymphedema following breast cancer surgery is a continuing problem. In this study, we assessed the reliability and validity of circumferential measurements and water displacement for measuring upper-limb volume. Subjects. Participants included subjects who had had breast cancer surgery, including axillary dissection (19 with and 22 without a diagnosis of arm lymphedema) and 25 control subjects. Methods. Two raters measured each subject by using circumferential tape measurements at specified distances from the fingertips and in relation to anatomic landmarks and by using water displacement. Interrater reliability was calculated by analysis of variance and multilevel modeling. Volumes from circumferential measurements were compared with those from water displacement by use of means and correlation coefficients, respectively. The standard error of measurement, minimum detectable change (MDC), and limits of agreement (LOA) for volumes also were calculated. Results. Arm volumes obtained with these methods had high reliability. Compared with volumes from water displacement, volumes from circumferential measurements had high validity, although these volumes were slightly larger. Expected differences between subjects with and without clinical lymphedema following breast cancer were found. The MDC of volumes, or the error associated with a single measure, for data based on anatomic landmarks was lower than that based on distance from fingertips. The mean LOA with water displacement were lower for data based on anatomic landmarks than for data based on distance from fingertips. Discussion and Conclusion. Volumes calculated from circumferential measurements based on anatomic landmarks are reliable, valid, and more accurate than those based on distance from fingertips.
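Limb volume is commonly derived from circumferential tape measurements by treating each segment between adjacent measurement sites as a truncated cone (frustum) and summing the segment volumes. The abstract does not state its exact formula, so the sketch below is a generic version of that standard approach, with hypothetical readings:

```python
import math

def frustum_volume(c1, c2, h):
    """Volume of a truncated cone given its two end circumferences and height (same length unit)."""
    return h * (c1 ** 2 + c1 * c2 + c2 ** 2) / (12 * math.pi)

def limb_volume(circumferences, spacing=4.0):
    """Sum frustum volumes over consecutive circumference pairs (e.g. cm -> cm^3).

    The 4 cm default spacing is an illustrative assumption, not the study's protocol.
    """
    return sum(frustum_volume(a, b, spacing)
               for a, b in zip(circumferences, circumferences[1:]))

# Hypothetical forearm readings taken every 4 cm, in centimetres.
readings = [15.0, 16.5, 18.0, 20.0, 22.5]
print(round(limb_volume(readings), 1))
```

With equal end circumferences the formula reduces to the cylinder volume C^2 h / (4 pi), which is a quick sanity check on the implementation.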
Abstract:
Purpose: This study was conducted to examine the test-retest reliability of a measure of prediagnosis physical activity participation administered to colorectal cancer survivors recruited from a population-based state cancer registry. Methods: A total of 112 participants completed two telephone interviews, 1 month apart, reporting usual weekly physical activity in the year before their cancer diagnosis. Intraclass correlation coefficients (ICC) and standard error of measurement (SEM) were used to describe the test-retest reliability of the measure across the sample; the Bland-Altman approach was used to describe reliability at the individual level. The test-retest reliability for categorized total physical activity (active, insufficiently active, sedentary) was assessed using the kappa statistic. Results: When the complete sample was considered, the ICC ranged from 0.40 (95% CI: 0.24, 0.55) for vigorous gardening to 0.77 (95% CI: 0.68, 0.84) for moderate physical activity. The SEM, however, were large, indicating high measurement error. The Bland-Altman plots indicated that the reproducibility of data decreases as the amount of physical activity reported each week increases. The kappa coefficient for the categorized data was 0.62 (95% CI: 0.48, 0.76). Conclusion: Overall, the results indicated low levels of repeatability for this measure of historical physical activity. Categorizing participants as active, insufficiently active, or sedentary provides a higher level of test-retest reliability.
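The kappa statistic used for the categorized activity data measures agreement between two paired sets of category labels beyond chance. A minimal pure-Python sketch of Cohen's kappa; the labels below are illustrative, not the study's data:

```python
def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa for two paired lists of category labels."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    categories = sorted(set(ratings1) | set(ratings2))
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Chance agreement expected from the marginal label proportions.
    p_e = sum((ratings1.count(c) / n) * (ratings2.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical test and retest classifications for six participants.
test1 = ["active", "active", "sedentary", "active", "sedentary", "active"]
test2 = ["active", "sedentary", "sedentary", "active", "sedentary", "active"]
print(round(cohens_kappa(test1, test2), 3))
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for test-retest work like this.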
Abstract:
Background. We describe the development, reliability and applications of the Diagnostic Interview for Psychoses (DIP), a comprehensive interview schedule for psychotic disorders. Method. The DIP is intended for use by interviewers with a clinical background and was designed to occupy the middle ground between fully structured, lay-administered schedules and semi-structured, psychiatrist-administered interviews. It encompasses four main domains: (a) demographic data; (b) social functioning and disability; (c) a diagnostic module comprising symptoms, signs and past history ratings; and (d) patterns of service utilization and patient-perceived need for services. It generates diagnoses according to several sets of criteria using the OPCRIT computerized diagnostic algorithm and can be administered either on-screen or in a hard-copy format. Results. The DIP proved easy to use and was well accepted in the field. For the diagnostic module, inter-rater reliability was assessed on 20 cases rated by 24 clinicians: good reliability was demonstrated for both ICD-10 and DSM-III-R diagnoses. Seven cases were interviewed 2-11 weeks apart to determine test-retest reliability, with pairwise agreement of 0.8-1.0 for most items. Diagnostic validity was assessed in 10 cases, interviewed with the DIP and using the SCAN as 'gold standard': in nine cases clinical diagnoses were in agreement. Conclusions. The DIP is suitable for use in large-scale epidemiological studies of psychotic disorders, as well as in smaller studies where time is at a premium. While the diagnostic module stands on its own, the full DIP schedule, covering demography, social functioning and service utilization, makes it a versatile multi-purpose tool.
Abstract:
Background/aims Macular pigment is thought to protect the macula against exposure to light and oxidative stress, both of which may play a role in the development of age-related macular degeneration. The aim was to clinically evaluate a novel cathode-ray-tube-based method for measurement of macular pigment optical density (MPOD) known as apparent motion photometry (AMP). Methods The authors took repeat readings of MPOD centrally (0°) and at 3° eccentricity for 76 healthy subjects (mean age (±SD) 26.5±13.2 years, range 18–74 years). Results The overall mean MPOD for the cohort was 0.50±0.24 at 0°, and 0.28±0.20 at 3° eccentricity; these values were significantly different (t=-8.905, p<0.001). The coefficients of repeatability were 0.60 and 0.48 for the 0° and 3° measurements respectively. Conclusions The data suggest that when the same operator is taking repeated 0° AMP MPOD readings over time, only changes of more than 0.60 units can be classed as clinically significant. In other words, AMP is not suitable for monitoring changes in MPOD over time, as increases of this magnitude would not be expected, even in response to dietary modification or nutritional supplementation.
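A coefficient of repeatability in the Bland-Altman sense is conventionally 1.96 times the standard deviation of the differences between paired repeat readings; values differing by less than it are within measurement noise. A small sketch under that assumption, with hypothetical MPOD values (not the study's data):

```python
import math

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability: 1.96 x SD of the
    within-subject differences between two repeated readings."""
    diffs = [a - b for a, b in zip(first, second)]
    mean_d = sum(diffs) / len(diffs)
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# Hypothetical repeat MPOD readings for four subjects.
first = [0.50, 0.60, 0.40, 0.55]
second = [0.45, 0.65, 0.50, 0.50]
print(round(coefficient_of_repeatability(first, second), 3))
```

The study's interpretation follows directly: with a repeatability coefficient of 0.60, only observed changes larger than 0.60 units can be distinguished from test-retest noise.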
Abstract:
Impressions about product quality and reliability can depend as much on perceptions about brands and country of origin as on data regarding performance and failure. This has implications for companies in developing countries that need to compete with importers. For manufacturers in industrialised countries it has implications for the value of transferred technologies. This article considers the issue of quality and reliability when technology is transferred between countries with different levels of development. It is based on UK and Chinese company case studies and questionnaire surveys undertaken among three company groups: UK manufacturers; Chinese manufacturers; Chinese users. Results show that all three groups recognise quality and reliability as important and support the premise that foreign technology based machines made in China carry a price premium over Chinese machines based on local technology. Closer examination reveals a number of important differences concerning the perceptions and reality of quality and reliability between the groups.
Abstract:
Research on production systems design has in recent years tended to concentrate on ‘software’ factors such as organisational aspects, work design, and the planning of the production operations. In contrast, relatively little attention has been paid to maximising the contributions made by fixed assets, particularly machines and equipment. However, as the cost of unproductive machine time has increased, reliability, particularly of machine tools, has become ever more important. Reliability theory and research have traditionally been based in the main on electrical and electronic equipment, whereas mechanical devices, especially machine tools, have not received sufficiently objective treatment. A recently completed research project has considered the reliability of machine tools by taking sample surveys of purchasers, maintainers and manufacturers. Breakdown data were also collected from a number of engineering companies and analysed using both manual and computer techniques. Results obtained have provided an indication of those factors most likely to influence reliability and which in turn could lead to improved design and selection of machine tool systems. Statistical analysis of long-term field data has revealed patterns of trends of failure which could help in the design of more meaningful maintenance schemes.
Abstract:
Background: Age-related macular degeneration (AMD) is the leading cause of visual disability in people over 60 years of age in the developed world. The success of treatment deteriorates with increased latency of diagnosis. The purpose of this study was to determine the reliability of the macular mapping test (MMT), and to investigate its potential as a screening tool. Methods: The study population comprised 31 healthy eyes of 31 participants. To assess reliability, four MMT measurements were taken in two sessions separated by one hour by two practitioners, with reversal of order in the second session. MMT readings were also taken from 17 age-related maculopathy (ARM) and 12 AMD affected eyes. Results: For the normal cohort, average MMT scores ranged from 85.5 to 100.0 MMT points. Scores ranged from 79.0 to 99.0 for the ARM group and from 9.0 to 92.0 for the AMD group. MMT scores were reliable to within ± 7.0 points. The difference between AMD affected eyes and controls (z = 3.761, p < 0.001) was significant. The difference between ARM affected eyes and controls was not significant (z = -0.216, p = 0.829). Conclusion: The reliability data show that a change of 14 points or more is required to indicate a clinically significant change. This value is required for use of the MMT as an outcome measure in clinical trials. Although there was no difference between MMT scores from ARM affected eyes and controls, the MMT has the advantage over the Amsler grid in that it uses a letter target, has a peripheral fixation aid, and provides a numerical score. This score could be beneficial in office and home monitoring of AMD progression, as well as an outcome measure in clinical research. © 2005 Bartlett et al; licensee BioMed Central Ltd.
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant and the X variable may only account for a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability and the ‘intra-class correlation coefficient’ should be used as an alternative. A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as ‘least squares’ provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
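Point 1 above (always reporting r squared alongside r) is straightforward to operationalize. A minimal pure-Python sketch with illustrative data:

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation coefficient for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative data: a near-linear relationship.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
r = pearson_r(x, y)
# r squared is the proportion of variance in y accounted for by the linear model.
print(round(r, 4), round(r * r, 4))
```

As the abstract warns, a modest r can be statistically significant with many observations while r squared reveals that very little variance is actually explained.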
Abstract:
Fault tree analysis is used as a tool within hazard and operability (Hazop) studies. The present study proposes a new methodology for obtaining the exact TOP event probability of coherent fault trees. The technique uses a top-down approach similar to that of FATRAM. This new Fault Tree Disjoint Reduction Algorithm resolves all the intermediate events in the tree except OR gates with basic event inputs so that a near minimal cut sets expression is obtained. Then Bennetts' disjoint technique is applied and remaining OR gates are resolved. The technique has been found to be appropriate as an alternative to Monte Carlo simulation methods when rare events are encountered and exact results are needed. The algorithm has been developed in FORTRAN 77 on the Perq workstation as an addition to the Aston Hazop package. The Perq graphical environment enabled a friendly user interface to be created. The total package takes as its input cause and symptom equations using Lihou's form of coding and produces both drawings of fault trees and the Boolean sum of products expression into which reliability data can be substituted directly.
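Once a fault tree is expressed as gates over basic events, gate probabilities follow from elementary probability, provided the basic events are independent and not repeated across branches; repeated events are exactly why the disjoint-term expansion described above is needed for exact results. A minimal illustrative sketch for a hypothetical two-gate tree (not the paper's algorithm):

```python
from functools import reduce

def and_gate(probs):
    """P(all inputs occur), assuming independent basic events."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """P(at least one input occurs), assuming independent basic events."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical tree: TOP = (E1 AND E2) OR E3, with assumed basic-event probabilities.
p1, p2, p3 = 0.1, 0.2, 0.05
top = or_gate([and_gate([p1, p2]), p3])
print(top)
```

When the same basic event feeds several gates, this naive composition double-counts it, which is the motivation for reducing to disjoint (mutually exclusive) terms before substituting reliability data.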
Abstract:
This research was concerned with identifying factors which may influence human reliability within chemical process plants - these factors are referred to as Performance Shaping Factors (PSFs). Following a period of familiarization within the industry, a number of case studies were undertaken covering a range of basic influencing factors. Plant records and site `lost time incident reports' were also used as supporting evidence for identifying and classifying PSFs. In parallel to the investigative research, the available literature appertaining to human reliability assessment and PSFs was considered in relation to the chemical process plant environment. As a direct result of this work, a PSF classification structure has been produced with an accompanying detailed listing. Phase two of the research considered the identification of important individual PSFs for specific situations. Based on the experience and data gained during phase one, it emerged that certain generic features of a task influenced PSF relevance. This led to the establishment of a finite set of generic task groups and response types. Similarly, certain PSFs influence some human errors more than others. The result was a set of error type key words, plus the identification and classification of error causes with their underlying error mechanisms. By linking all these aspects together, a comprehensive methodology has been put forward as the basis of a computerized aid for system designers. To recapitulate, the major results of this research have been: one, the development of a comprehensive PSF listing specifically for the chemical process industries with a classification structure that facilitates future updates; and two, a method of identifying relevant PSFs and their order of priority. Future requirements are the evaluation of the PSF listing and the identification method. The latter must be considered both in terms of `usability' and its success as a design enhancer, in terms of an observable reduction in important human errors.
Abstract:
In the face of global population growth and the uneven distribution of water supply, a better knowledge of the spatial and temporal distribution of surface water resources is critical. Remote sensing provides a synoptic view of ongoing processes, which addresses the intricate nature of water surfaces and allows an assessment of the pressures placed on aquatic ecosystems. However, the main challenge in identifying water surfaces from remotely sensed data is the high variability of spectral signatures, both in space and time. In the last 10 years only a few operational methods have been proposed to map or monitor surface water at continental or global scale, and each of them shows limitations. The objective of this study is to develop and demonstrate the adequacy of a generic multi-temporal and multi-spectral image analysis method to detect water surfaces automatically, and to monitor them in near-real-time. The proposed approach, based on a transformation of the RGB color space into HSV, provides dynamic information at the continental scale. The validation of the algorithm showed very few omission errors and no commission errors. It demonstrates the ability of the proposed algorithm to perform as effectively as human interpretation of the images. The validation of the permanent water surface product with an independent dataset derived from high resolution imagery showed an accuracy of 91.5% and few commission errors. Potential applications of the proposed method have been identified and discussed. The methodology that has been developed is generic: it can be applied to sensors with similar bands with good reliability, and minimal effort. Moreover, this experiment at continental scale showed that the methodology is efficient for a large range of environmental conditions. Additional preliminary tests over other continents indicate that the proposed methodology could also be applied at the global scale without too many difficulties.
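The RGB-to-HSV transformation at the core of such methods is available in the Python standard library. The threshold rule below is purely hypothetical, included only to show how a per-pixel decision might be built on top of the transform; it is not the paper's actual decision rule:

```python
import colorsys

def pixel_to_hsv(r, g, b):
    """Convert one 8-bit RGB pixel to HSV; colorsys expects channels in [0, 1]."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

def looks_like_water(r, g, b, hue_range=(0.5, 0.75), max_value=0.4):
    """Hypothetical rule: a dark pixel with a blue-ish hue is flagged as water.

    The hue window and brightness cutoff are illustrative assumptions only.
    """
    h, s, v = pixel_to_hsv(r, g, b)
    return hue_range[0] <= h <= hue_range[1] and v <= max_value

# A dark blue-ish pixel versus a bright green (vegetation-like) pixel.
print(looks_like_water(10, 20, 60), looks_like_water(50, 200, 50))
```

Working in HSV separates chromaticity (hue, saturation) from brightness (value), which is one reason such transforms help cope with the spectral variability of water surfaces the abstract mentions.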
Abstract:
Descriptions of vegetation communities are often based on vague semantic terms describing species presence and dominance. For this reason, some researchers advocate the use of fuzzy sets in the statistical classification of plant species data into communities. In this study, spatially referenced vegetation abundance values collected from Greek phrygana were analysed by ordination (DECORANA), and classified on the resulting axes using fuzzy c-means to yield a point data-set representing local memberships in characteristic plant communities. The fuzzy clusters matched vegetation communities noted in the field, which tended to grade into one another, rather than occupying discrete patches. The fuzzy set representation of the community exploited the strengths of detrended correspondence analysis while retaining richer information than a TWINSPAN classification of the same data. Thus, in the absence of phytosociological benchmarks, meaningful and manageable habitat information could be derived from complex, multivariate species data. We also analysed the influence of the reliability of different surveyors' field observations by multiple sampling at a selected sample location. We show that the impact of surveyor error was more severe in the Boolean than the fuzzy classification. © 2007 Springer.
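Fuzzy c-means, unlike a Boolean classification, assigns each sample a graded membership in every cluster. For fixed cluster centres and fuzzifier m, the standard membership update can be sketched as follows (one-dimensional, with illustrative values rather than the study's ordination scores):

```python
def fuzzy_memberships(x, centres, m=2.0):
    """Fuzzy c-means membership of a 1-D point in each cluster, for fixed centres.

    Standard update: u_i = 1 / sum_j (d_i / d_j) ** (2 / (m - 1)).
    """
    dists = [abs(x - c) for c in centres]
    if any(d == 0.0 for d in dists):          # point sits exactly on a centre
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d_i / d_j) ** exp for d_j in dists) for d_i in dists]

# Hypothetical ordination score 2.0 against two community centres at 0 and 10.
print(fuzzy_memberships(2.0, [0.0, 10.0]))
```

Memberships always sum to one across clusters, so communities that grade into one another, as observed in the field, are represented by intermediate membership values rather than a forced hard assignment.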
Abstract:
The annealing properties of Type IA Bragg gratings are investigated and compared with Type I and Type IIA Bragg gratings. The transmission properties (mean and modulated wavelength components) of gratings held at predetermined temperatures are recorded, from which decay characteristics are inferred. Our data show critical results concerning the high temperature stability of Type IA gratings, as they undergo a drastic initial decay at 100°C, with a consequent mean index change that is severely reduced at this temperature. However, the modulated index change of IA gratings remains stable at lower annealing temperatures of 80°C, and the mean index change decays at a comparable rate to Type I gratings at 80°C. Extending this work to include the thermal decay of Type IA gratings inscribed under strain shows that the application of strain quite dramatically transforms the temperature characteristics of the Type IA grating, modifying the temperature coefficient and annealing curves, with the grating showing a remarkable improvement in high temperature stability, leading to a robust grating that can survive temperatures exceeding 180°C. Under conditions of inscription under strain it is found that the temperature coefficient increases, but is maintained at a value considerably different to the Type I grating. Therefore, the combination of Type I and IA (strained) gratings makes it possible to decouple temperature and strain over larger temperature excursions.
Abstract:
The software architecture and development considerations for an open metadata extraction and processing framework are outlined. Special attention is paid to the aspects of reliability and fault tolerance. Grid infrastructure is shown to be a useful backend for general-purpose tasks.