908 results for High Reliability
Abstract:
Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness and a Brisbane-based trial of its use within the quality improvement context determined the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish validity and reliability of the CCI Tool, and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N = 498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scale (RCS) were compared using Spearman's rank order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents.
Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high care and low care groups (t(199) = 10.77, p < 0.001), while the differences between male and female residents were not significant (t(414) = 0.56, p = 0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to the small sample size, enable users to make judgements about quality within and between facilities. Thus it is recommended that the ResCareQA be adopted for wider use.
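The inter-rater and test-retest agreement figures reported above are kappa statistics; a minimal sketch of Cohen's kappa in Python, assuming two raters' categorical scores stored as plain lists (the function name and example data are illustrative, not taken from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical codes."""
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters coded independently
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (po - pe) / (1 - pe)
```

For test-retest reliability the same formula applies, with the two "raters" being the same assessor on two occasions.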
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases with aging equipment and can pose serious consequences for continuity of electricity supply. As the equipment used in high voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to include detailed modelling and operation of substation and sub-transmission equipment using network flow evaluation and to consider multiple levels of component failures. In this thesis a new model for aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied in this thesis to include and examine the impact of aging equipment on the system reliability of bulk supply loads and consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors such as the actual physical network configuration and operation, aging conditions of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of a non-repairable failure.
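The thesis's aging-failure model is not reproduced in the abstract, but the general idea of a conditional aging failure probability over planning years can be sketched with a Weibull life model; the function and parameters below are illustrative assumptions, not the thesis's actual model:

```python
import math

def aging_failure_prob(age, horizon, eta, beta):
    """Probability that a component which has survived to `age` suffers a
    non-repairable aging failure within the next `horizon` years, under a
    Weibull life model with scale eta and shape beta (beta > 1 means an
    increasing hazard, i.e. wear-out)."""
    H = lambda t: (t / eta) ** beta   # cumulative hazard
    return 1.0 - math.exp(-(H(age + horizon) - H(age)))
```

With beta = 1 this reduces to the constant-hazard (random failure) case; with beta > 1 the one-year failure probability grows with equipment age, which is the effect the thesis accounts for explicitly.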
Abstract:
This paper presents the findings of an investigation of the challenges Australian manufacturers are currently facing. A comprehensive questionnaire survey was conducted among leading Australian manufacturers, and this paper reports its main findings. Evidence indicates that product quality and reliability (Q & R) are the main challenges for Australian manufacturers, with design capability and time to market coming second. Results show that there is no effective information exchange between the parties involved in production and quality control. Learning from past mistakes has not proven to have a significant effect on improving product quality. The pace of technological innovation is high, with companies introducing as many as five new products a year, and this pace puts pressure on the Q & R of new products. To overcome these new challenges, companies need a Q & R improvement model.
Abstract:
Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely during recent decades. The flexible shapes and variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) in an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate a large amount of instantaneous power. In addition to such decomposition processes, there have recently been demands for pulsed power in other areas such as the combination of molecules (e.g. fusion, material joining), radiation sources (e.g. electron beams, lasers and radar), explosions (e.g. concrete recycling), and wastewater, exhaust gas and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gases, fluids and solids), in some cases to form the plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in different areas that need a flexible pulse modulator to vary pulse features and investigate the influence of these variations on the application. In addition, there is the need to prevent the waste of a considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is therefore a critical capability, enabling the pulse supply to halt the supply process at any stage.
Different pulse modulators utilising different accumulation techniques, including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as spark gaps and hydrogen thyratrons) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they also suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and a short life span. They are also bulky, heavy and expensive. Recently developed solid-state switching technology is an appropriate substitute for these switching devices because of the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and rise time of available switches, which in some cases cannot satisfy the application's requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high voltage pulse generation. Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators is the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under the following two principal objectives.
• To innovate and develop novel solid-state based topologies for pulsed power generation
• To improve available technologies that have the potential to accommodate solid-state technology by revising, reconfiguring and adjusting their structure and control algorithms.
The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. This study suggested that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was used to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values are produced at the output. Based on the evidence acquired from this examination, switching-transient effects on pulse rise time were ruled out. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle.
To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. Component selection and energy-exchange calculations are carried out with respect to application specifications and demands. Both topologies were modelled simply, and simulation studies were carried out with the simplified models. Experimental assessments were also executed on implemented hardware, and the results verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted by diode rectifiers to adjust MGs for proper voltage sharing. However, despite utilizing solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Among a number of charging techniques, the resonant phenomenon is adopted in one proposal to charge the capacitors. In addition to charging the capacitors at twice the input voltage, triggering the switches at the moment at which the conducted current through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors.
According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to the capacitor. Insulation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of fast switching devices required in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved, which reduces conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all the relevant analysis and approaches for these topologies.
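As a rough illustration of the voltage relationships mentioned above (series discharge of parallel-charged stages, and resonant charging to twice the input voltage), the ideal lossless output of a Marx generator can be sketched as follows; this is an illustrative simplification, not a model from the thesis:

```python
def marx_output_voltage(n_stages, dc_input, resonant_charging=False):
    """Ideal (lossless) Marx generator output: capacitors are charged in
    parallel, then discharged in series, so stage voltages add. With
    resonant charging, each capacitor reaches twice the DC input voltage."""
    per_stage = 2.0 * dc_input if resonant_charging else dc_input
    return n_stages * per_stage
```

In practice switch drops, stray impedances and load characteristics reduce the achievable output, which is part of what the thesis's detailed simulations and hardware tests quantify.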
Abstract:
Study Design. A sheep study designed to compare the accuracy of static radiographs, dynamic radiographs, and computed tomographic (CT) scans for the assessment of thoracolumbar facet joint fusion, as determined by micro-CT scanning. Objective. To determine the accuracy and reliability of conventional imaging techniques in identifying the status of thoracolumbar (T13-L1) facet joint fusion in a sheep model. Summary of Background Data. Plain radiographs are commonly used to determine the integrity of surgical arthrodesis of the thoracolumbar spine. Many previous studies of fusion success have relied solely on postoperative assessment of plain radiographs, a technique lacking sensitivity for pseudarthrosis. CT may be a more reliable technique, but is less well characterized. Methods. Eleven adult sheep were randomized to either attempted arthrodesis using autogenous bone graft and internal fixation (n = 3) or intentional pseudarthrosis (IP) using oxidized cellulose and internal fixation (n = 8). After 6 months, facet joint fusion was assessed by independent observers, using (1) plain static radiography alone, (2) additional dynamic radiographs, and (3) additional reconstructed spiral CT imaging. These assessments were correlated with high-resolution micro-CT imaging to predict the utility of the conventional imaging techniques in the estimation of fusion success. Results. The capacity of plain radiography alone to correctly predict fusion or pseudarthrosis was 43%, and was not improved by adding dynamic radiography, which also achieved 43% accuracy. Adding assessment by reformatted CT imaging to the plain radiography techniques increased the capacity to correctly predict fusion outcome to 86%. The sensitivity, specificity, and accuracy of static radiography were 0.33, 0.55, and 0.43, respectively; those of dynamic radiography were 0.46, 0.40, and 0.43, respectively; and those of radiography plus CT were 0.88, 0.85, and 0.86, respectively. Conclusion.
CT-based evaluation correlated most closely with high-resolution micro-CT imaging. Neither plain static nor dynamic radiographs were able to predict fusion outcome accurately.
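The sensitivity, specificity and accuracy figures above follow from a standard 2×2 confusion table against the micro-CT reference; a minimal sketch (an illustrative function with made-up counts, not the study's code or data):

```python
def diagnostic_stats(tp, fp, tn, fn):
    """Sensitivity, specificity and overall accuracy from 2x2 table counts:
    true/false positives (tp, fp) and true/false negatives (tn, fn)."""
    sensitivity = tp / (tp + fn)                 # fused joints correctly called fused
    specificity = tn / (tn + fp)                 # pseudarthroses correctly called not fused
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # all correct calls over all joints
    return sensitivity, specificity, accuracy
```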
Abstract:
STUDY DESIGN: Controlled laboratory study. OBJECTIVES: To investigate the reliability and concurrent validity of photographic measurements of hallux valgus angle compared to radiographs as the criterion standard. BACKGROUND: Clinical assessment of hallux valgus involves measuring alignment between the first toe and metatarsal on weight-bearing radiographs or visually grading the severity of deformity with categorical scales. Digital photographs offer a noninvasive method of measuring deformity on an exact scale; however, the validity of this technique has not previously been established. METHODS: Thirty-eight subjects (30 female, 8 male) were examined (76 feet, 54 with hallux valgus). Computer software was used to measure hallux valgus angle from digital records of bilateral weight-bearing dorsoplantar foot radiographs and photographs. One examiner measured 76 feet on 2 occasions 2 weeks apart, and a second examiner measured 40 feet on a single occasion. Reliability was investigated by intraclass correlation coefficients and validity by 95% limits of agreement. The Pearson correlation coefficient was also calculated. RESULTS: Intrarater and interrater reliability were very high (intraclass correlation coefficients greater than 0.96) and 95% limits of agreement between photographic and radiographic measurements were acceptable. Measurements from photographs and radiographs were also highly correlated (Pearson r = 0.96). CONCLUSIONS: Digital photographic measurements of hallux valgus angle are reliable and have acceptable validity compared to weight-bearing radiographs. This method provides a convenient and precise tool in assessment of hallux valgus, while avoiding the cost and radiation exposure associated with radiographs.
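The 95% limits of agreement used above are the Bland-Altman limits between the two measurement methods; a minimal sketch, assuming paired photographic and radiographic angle measurements supplied as lists (illustrative code, not the study's analysis):

```python
import statistics

def limits_of_agreement(method_x, method_y):
    """Bland-Altman 95% limits of agreement between paired measurements
    from two methods (e.g. photographic vs radiographic angles)."""
    diffs = [x - y for x, y in zip(method_x, method_y)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)   # sample SD of the differences
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
```

Narrow limits centred near zero, as reported in the study, indicate that the photographic method can substitute for radiographs within an acceptable error band.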
Abstract:
Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). The decorrelation transformation can substantially amplify biases in the phase measurements; the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. The concept of PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. As a result, RTK solutions can be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is sufficiently large for least-squares estimation of the RTK solutions as well. Considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. The AR reliability is usually characterised by the AR success rate. In this contribution an AR validation decision matrix is first introduced to understand the impact of the success rate. Moreover, the AR risk probability is included in a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between success rate and AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and affecting the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio-test in the ambiguity validation process.
In this paper, Galileo constellation data are tested with simulated observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can the AR validation provide decisions about the correctness of AR that are close to the real world, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix, at a given high success rate, from the multiple constellations instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
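The AR success rate referred to above is commonly bounded from below by the integer-bootstrapping success rate, computed from the conditional standard deviations of the decorrelated float ambiguities; a minimal sketch of that standard formula (an assumption here — the paper's exact success-rate computation is not given in the abstract):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrapped_success_rate(cond_stds):
    """Bootstrapped lower bound on the ILS ambiguity success rate, given the
    conditional standard deviations of the decorrelated float ambiguities.
    Each factor is the probability of rounding one ambiguity correctly."""
    p = 1.0
    for sigma in cond_stds:
        p *= 2.0 * norm_cdf(1.0 / (2.0 * sigma)) - 1.0
    return p
```

Smaller conditional standard deviations (better decorrelation, stronger geometry) drive the product toward 1, which is why PAR prefers to fix only the best-determined subset of ambiguities.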
Abstract:
Urban transit system performance may be quantified and assessed using transit capacity and productive capacity for planning, design and operational management. Bunker (4) defines important productive performance measures of an individual transit service and transit line. Transit work (p-km) captures the transit task performed over distance. Transit productiveness (p-km/h) captures transit work performed over time. This paper applies productive performance with risk assessment to quantify transit system reliability. Theory is developed to monetize transit segment reliability risk on the basis of demonstration Annual Reliability Event rates by transit facility type, segment productiveness, and unit-event severity. A comparative example of peak hour performance of a transit sub-system containing bus-on-street, busway, and rail components in Brisbane, Australia demonstrates through practical application the importance of valuing reliability. The comparison reveals the highest risk segments to be long, highly productive on-street bus segments, followed by busway (BRT) segments and then rail segments. A transit reliability risk reduction treatment example demonstrates that benefits can be significant and should be incorporated into project evaluation in addition to those of regular travel time savings, reduced emissions and safety improvements. Reliability can be used to identify high risk components of the transit system, to draw comparisons between modes in both planning and operations settings, and to value improvement scenarios in a project evaluation setting. The methodology can also be applied to inform daily transit system operational management.
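The monetization described above (event rate × productiveness × unit-event severity) can be sketched as a simple ranking of segments by annual risk cost; the segment names and figures below are illustrative assumptions, not the paper's data:

```python
def annual_risk_cost(event_rate, productiveness, unit_severity):
    """Monetized annual reliability risk of one transit segment:
    events/year x productiveness (p-km/h) x cost per unit of severity."""
    return event_rate * productiveness * unit_severity

def rank_segments(segments):
    """Rank (name, event_rate, productiveness, unit_severity) tuples by
    annual risk cost, highest-risk segment first."""
    return sorted(segments, key=lambda s: annual_risk_cost(*s[1:]), reverse=True)
```

With plausible inputs, a long, highly productive on-street bus segment with a high event rate tops the ranking, consistent with the paper's comparative finding.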
Abstract:
Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, then it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks. We also report an evaluation of the tutorial's effectiveness, in which twenty computer science educators participated. The average classification accuracy of the participants on each of the three Neo-Piagetian stages was 85%, 71% and 78%. Participants also rated their agreement with the expert classifications, and indicated high agreement (91%, 83% and 91% across the three Neo-Piagetian stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions before and after the tutorial was 29% and 75% respectively. Our key contribution is the demonstration of the feasibility of the Neo-Piagetian approach to classifying assessment materials, by demonstrating that it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.
Abstract:
STUDY DESIGN: Reliability and case-control injury study. OBJECTIVES: 1) To determine whether a novel device, designed to measure eccentric knee flexor strength via the Nordic hamstring exercise (NHE), displays acceptable test-retest reliability; 2) to determine normative values for eccentric knee flexor strength derived from the device in individuals without a history of hamstring strain injury (HSI); and 3) to determine whether the device could detect weakness in elite athletes with a previous history of unilateral HSI. BACKGROUND: HSIs and reinjuries are the most common cause of lost playing time in a number of sports. Eccentric knee flexor weakness is a major modifiable risk factor for future HSIs; however, there is a lack of easily accessible equipment to assess this strength quality. METHODS: Thirty recreationally active males without a history of HSI completed NHEs on the device on 2 separate occasions. Intraclass correlation coefficients (ICCs), typical error (TE), typical error as a coefficient of variation (%TE), and minimum detectable change at a 95% confidence interval (MDC95) were calculated. Normative strength data were determined using the most reliable measurement. An additional 20 elite athletes with a history of unilateral HSI within the previous 12 months performed NHEs on the device to determine whether residual eccentric muscle weakness existed in the previously injured limb. RESULTS: The device displayed moderate to high reliability (ICC = 0.83 to 0.90; TE = 21.7 N to 27.5 N; %TE = 5.8 to 8.5; MDC95 = 60.1 to 76.2 N). Mean ± SD normative eccentric knee flexor strength, based on the uninjured group, was 344.7 ± 61.1 N for the left side and 361.2 ± 65.1 N for the right side.
The previously injured limbs were 15% weaker than the contralateral uninjured limbs (mean difference = 50.3 N; 95% CI = 25.7 to 74.9 N; P < .01), 15% weaker than the normative left limb data (mean difference = 50.0 N; 95% CI = 1.4 to 98.5 N; P = .04), and 18% weaker than the normative right limb data (mean difference = 66.5 N; 95% CI = 18.0 to 115.1 N; P < .01). CONCLUSIONS: The experimental device offers a reliable method to determine eccentric knee flexor strength and strength asymmetry, and revealed residual weakness in previously injured elite athletes.
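The MDC95 values above follow from the typical error via the standard formula MDC95 = TE × 1.96 × √2; a quick sketch, which reproduces the reported range from the reported TE values:

```python
import math

def mdc95(typical_error):
    """Minimum detectable change at 95% confidence from the typical error:
    the smallest change exceeding measurement noise on retest."""
    return typical_error * 1.96 * math.sqrt(2.0)
```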
Abstract:
BACKGROUND: Conjunctival ultraviolet autofluorescence (UVAF) photography was developed to detect and characterise pre-clinical sunlight-induced UV damage. The reliability of this measurement and its relationship to outdoor activity are currently unknown. METHODS: 599 people aged 16-85 years in the cross-sectional Norfolk Island Eye Study were included in the validation study. 196 individual UVAF photographs (49 people) and 60 UVAF photographs (15 people) of Norfolk Island Eye Study participants were used for intra- and inter-observer reliability assessment, respectively. Conjunctival UVAF was measured using UV photography. UVAF area was calculated using computerised methods by one grader on two occasions (intra-observer analysis) or by two graders (inter-observer analysis). Outdoor activity category, during summer and winter separately, was determined with a UV questionnaire. Total UVAF equalled the area measured in four conjunctival areas (nasal/temporal conjunctiva of the right and left eyes). RESULTS: Intra-observer (ρ_c = 0.988, 95% CI 0.967 to 0.996, p < 0.001) and inter-observer (ρ_c = 0.924, 95% CI 0.870 to 0.956, p < 0.001) concordance correlation coefficients of total UVAF exceeded 0.900. When grouped according to 10 mm² total UVAF increments, intra- and inter-observer reliability was very good (κ = 0.81) and good (κ = 0.71), respectively. Increasing time outdoors was strongly associated with increasing total UVAF in both summer and winter (p(trend) < 0.001). CONCLUSION: Intra- and inter-observer reliability of conjunctival UVAF is high. In this population, UVAF correlates strongly with the authors' survey-based assessment of time spent outdoors.
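The ρ_c values above are Lin's concordance correlation coefficients; a minimal sketch, assuming each grader's UVAF area measurements are supplied as lists (illustrative code, not the study's):

```python
import statistics

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement between two
    graders measuring the same quantity. Penalises both scatter about the
    45-degree line and any shift between the graders' means."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Unlike the Pearson correlation, ρ_c drops below 1 when one grader systematically reads higher than the other, which is why it is the preferred agreement index here.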
Abstract:
Background: High-risk foot complications such as neuropathy, ischaemia, deformity, infections, ulcers and amputations consume considerable health care resources and typically result from chronic diseases. This study aimed to develop and test the validity and reliability of a Queensland High Risk Foot Form (QHRFF) tool. Methods: Phase one involved developing the QHRFF using an existing diabetes high-risk foot tool, a literature search, an expert panel and several state-wide stakeholder groups. Phase two tested the criterion-related validity along with the inter- and intra-rater reliability of the final QHRFF. Three cohorts of patients (n = 94) and four clinicians, representing different levels of expertise, were recruited. Validity was determined by calculating sensitivity, specificity and positive predictive values (PPV). Kappa and intra-class correlation (ICC) statistics were used to establish reliability. Results: A QHRFF tool containing 46 items across seven domains was developed and endorsed. The majority of QHRFF items achieved moderate-to-perfect validity (PPV = 0.71 – 1) and reliability (kappa/ICC = 0.41 – 1). Items with weak validity and/or reliability included those identifying health professionals previously attending the patient, other (non-listed) co-morbidities, previous foot ulcer, foot deformity, optimum offloading and optimum footwear. Conclusions: The QHRFF had moderate-to-perfect validity and reliability across the majority of items, particularly those identifying individual co-morbidities and foot complications. Items with weak validity or reliability need to be re-defined or removed. Overall, the QHRFF appears to be a valid and reliable tool to assess, collect and measure clinical data pertaining to high-risk foot complications for clinical or research purposes.
Abstract:
The aim of the current study was to examine the dimensions and reliability of a hospital safety climate questionnaire in Chinese health-care practice. To achieve this, a cross-sectional survey of health-care professionals was undertaken at a university teaching hospital in Shandong province, China. Our survey instrument demonstrated very high internal consistency, comparing well with previous research in this field conducted in other countries. Factor analysis highlighted four key dimensions of safety climate, which centred on employee personal protection, employee interactions, safety-related housekeeping and time pressures. Overall, this study suggests that hospital safety climate represents an important aspect of health-care practice in contemporary China.
Abstract:
Background: The Upper Limb Functional Index (ULFI) is an internationally widely used outcome measure with robust, valid psychometric properties. The purpose of this study was to develop and validate a Spanish version of the ULFI (ULFI-Sp). Methods: A two-stage observational study was conducted. The ULFI was cross-culturally adapted to Spanish through double forward and backward translations, and its psychometric properties were then validated. Participants (n = 126) with various upper limb conditions of more than 12 weeks' duration completed the ULFI-Sp, the QuickDASH and the EuroQol Health Questionnaire 5 Dimensions (EQ-5D-3L). The full sample was used to determine internal consistency, concurrent criterion validity, construct validity and factor structure; a subgroup (n = 35) determined reliability at seven days. Results: The ULFI-Sp demonstrated high internal consistency (α = 0.94) and reliability (r = 0.93). The factor structure was one-dimensional and supported construct validity. Criterion validity with the EQ-5D-3L was fair and inversely correlated (r = −0.59). The QuickDASH data were unavailable for analysis due to excessive missing responses. Conclusions: The ULFI-Sp is a valid upper limb outcome measure with psychometric properties similar to those of the English language version.
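The internal consistency figure above (α = 0.94) is Cronbach's alpha; a minimal sketch, assuming questionnaire responses arranged as one score list per item (illustrative code, not the study's analysis):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency. `item_scores` is a list of
    per-item score lists, each with one score per respondent."""
    k = len(item_scores)
    n = len(item_scores[0])
    # Each respondent's total score across all items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(statistics.variance(item) for item in item_scores)
    return (k / (k - 1)) * (1.0 - item_var_sum / statistics.variance(totals))
```

Alpha approaches 1 when items vary together (respondents' totals dominate the variance), which is what a one-dimensional scale such as the ULFI-Sp should show.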