106 results for Reliability in automation
Abstract:
The aim of this study was to determine the effect of using video analysis software on the interrater reliability of visual assessments of gait videos in children with cerebral palsy. Two clinicians viewed the same random selection of 20 sagittal and frontal video recordings of 12 children with cerebral palsy routinely acquired during outpatient rehabilitation clinics. Both observers rated these videos in a random sequence for each lower limb using the Observational Gait Scale, once with standard video software and once with video analysis software (Dartfish®), which can perform angle and timing measurements. The video analysis software improved interrater agreement, measured by weighted Cohen's kappa, for the total score (κ 0.778→0.809) and for all of the items that required angle and/or timing measurements (knee position mid-stance κ 0.344→0.591; hindfoot position mid-stance κ 0.160→0.346; foot contact mid-stance κ 0.700→0.854; timing of heel rise κ 0.769→0.835). The use of video analysis software is an efficient approach to improving the reliability of visual video assessments.
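The weighted Cohen's kappa reported above penalizes rater disagreements by their distance on the ordinal scale, so near-misses count less than gross disagreements. A minimal sketch of the statistic (not the authors' code; the category counts and linear weighting are illustrative assumptions):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Weighted Cohen's kappa for two raters on an ordinal scale.

    r1, r2: lists of category indices (0..n_cat-1), one pair per rated item.
    """
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                                   # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance expectation from marginals
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# e.g. two raters scoring four videos on a 0-2 ordinal item
kappa = weighted_kappa([0, 1, 2, 0], [0, 1, 2, 0], n_cat=3)
```

`sklearn.metrics.cohen_kappa_score(r1, r2, weights="linear")` computes the same quantity if scikit-learn is available.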
Abstract:
The National Academies has stressed the need to develop quantifiable measures for methods that are currently qualitative in nature, such as the examination of fingerprints. Current protocols and procedures for these examinations rely heavily on a succession of subjective decisions, from the initial acceptance of evidence for probative value to the final assessment of forensic results. This project studied the concept of sufficiency associated with the decisions made by latent print examiners at the end of the various phases of the examination process. During this 2-year effort, a web-based interface was designed to capture the observations of 146 latent print examiners and trainees on 15 pairs of latent/control prints. Two main findings resulted from the study: (1) the concept of sufficiency is driven mainly by the number of, and spatial relationships between, the minutiae observed on the latent and control prints; data indicate that demographics (training, certification, years of experience) and non-minutiae-based features (such as level 3 features) do not play a major role in examiners' decisions; and (2) significant variability was observed in the detection and interpretation of friction ridge features at all levels of detail, as well as for factors that have the potential to influence the examination process, such as degradation, distortion, or the influence of the background and the development technique.
Abstract:
Family cohesion and adaptability, as operationalised in the Family Adaptability and Cohesion Scales III (FACES III), are two hypothesised dimensions of family functioning. We tested the properties of a French version of FACES III in schoolchildren (mean age: 13 years; S.D.: 0.85) recruited from the general population, and in their parents. Separate confirmatory factor analyses were performed for adolescents and adults. The results of both analyses were compatible with a two-factor structure similar to that proposed by the authors of the original instrument. However, orthogonality between the two factors was supported only in the adult data. Internal reliability estimates were 0.78 and 0.68 in adolescents, and 0.82 and 0.65 in adults, for cohesion and adaptability respectively.
Abstract:
Background: General practitioners play a central role in taking deprivation into consideration when caring for patients in primary care. Validated questions to identify deprivation in primary-care practices are still lacking. For both clinical and research purposes, this study therefore aims to develop and validate a standardized instrument measuring both material and social deprivation at an individual level. Methods: The Deprivation in Primary Care Questionnaire (DiPCare-Q) was developed using qualitative and quantitative approaches between 2008 and 2011. A systematic review identified 199 questions related to deprivation. Using judgmental item quality, these were reduced to 38 questions. Two focus groups (primary-care physicians, and primary-care researchers), structured interviews (10 laymen), and think-aloud interviews (eight cleaning staff) assured face validity. Item response theory analysis was then used to derive the DiPCare-Q index using data obtained from a random sample of 200 patients, who then completed the questionnaire a second time over the phone. For construct and criterion validity, the final 16 questions were administered to a random sample of 1,898 patients attending one of 47 different private primary-care practices in western Switzerland (validation set), along with questions on subjective social status (subjective SES ladder), education, source of income, welfare status, and subjective poverty. Results: Deprivation was defined in three distinct dimensions (table): material deprivation (eight items), social deprivation (five items) and health deprivation (three items). Item consistency was high in both the derivation (KR20 = 0.827) and the validation set (KR20 = 0.778). The DiPCare-Q index was reliable (ICC = 0.847). For construct validity, we showed the DiPCare-Q index to be correlated with patients' estimation of their position on the subjective SES ladder (rs = 0.539).
This position was correlated with both material and social deprivation independently, suggesting two separate mechanisms enhancing the feeling of deprivation. Conclusion: The DiPCare-Q is a rapid, reliable and validated instrument useful for measuring both material and social deprivation in primary care. Questions from the DiPCare-Q are easy to use when investigating patients' social history and could improve clinicians' ability to detect underlying social distress related to deprivation.
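The KR-20 statistic used above for item consistency is computable directly from the matrix of dichotomous item responses. A minimal sketch (the data matrix and the sample-variance `ddof` choice are illustrative assumptions, not the DiPCare-Q data):

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson formula 20 for a respondents x items matrix of 0/1 answers."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]                       # number of items
    p = x.mean(axis=0)                   # proportion endorsing each item
    totals = x.sum(axis=1)               # per-respondent total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / totals.var(ddof=1))

# e.g. four respondents answering three yes/no items
consistency = kr20([[1, 1, 1], [0, 0, 0], [1, 1, 0], [1, 0, 0]])
```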
Abstract:
The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation, embodied in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike in the CISC world, RISC processor architecture design is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies, both in software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware.
They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions.
The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
BACKGROUND: Excessive drinking is a major problem in Western countries. AUDIT (Alcohol Use Disorders Identification Test) is a 10-item questionnaire developed as a transcultural screening tool to detect excessive alcohol consumption and dependence in primary health care settings. OBJECTIVES: The aim of the study is to validate a French version of the Alcohol Use Disorders Identification Test (AUDIT). METHODS: We conducted a cross-sectional validation study in three French-speaking areas (Paris, Geneva and Lausanne). We examined psychometric properties of AUDIT, such as its internal consistency, and its capacity to correctly diagnose alcohol abuse or dependence as defined by DSM-IV and to detect hazardous drinking (defined as alcohol intake >30 g pure ethanol per day for men and >20 g pure ethanol per day for women). We calculated sensitivity, specificity, positive and negative predictive values, and Receiver Operating Characteristic curves. Finally, we compared the ability of AUDIT to accurately detect "alcohol abuse/dependence" with that of CAGE and MAST. RESULTS: 1207 patients presenting to outpatient clinics (Switzerland, n = 580) or general practitioners' offices (France, n = 627) successively completed the CAGE, MAST and AUDIT self-administered questionnaires, and were independently interviewed by a trained addiction specialist. AUDIT showed a good capacity to discriminate dependent patients (with AUDIT ≥13 for males: sensitivity 70.1%, specificity 95.2%, PPV 85.7%, NPV 94.7%; and for females: sensitivity 94.7%, specificity 98.2%, PPV 100%, NPV 99.8%) and hazardous drinkers (with AUDIT ≥7 for males: sensitivity 83.5%, specificity 79.9%, PPV 55.0%, NPV 82.7%; and with AUDIT ≥6 for females: sensitivity 81.2%, specificity 93.7%, PPV 64.0%, NPV 72.0%). AUDIT gives better results than MAST and CAGE for detecting "alcohol abuse/dependence", as shown by the comparative ROC curves.
CONCLUSIONS: The AUDIT questionnaire remains a good screening instrument for French-speaking primary care.
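The sensitivity, specificity, PPV and NPV reported for each AUDIT cut-off follow directly from the 2×2 table of screening result against diagnostic status. A minimal sketch (the counts are illustrative, not the study's data):

```python
def screening_stats(tp, fp, fn, tn):
    """Screening-test accuracy measures from 2x2 confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),   # P(test positive | condition present)
        "specificity": tn / (tn + fp),   # P(test negative | condition absent)
        "ppv": tp / (tp + fp),           # P(condition present | test positive)
        "npv": tn / (tn + fn),           # P(condition absent | test negative)
    }

# e.g. a hypothetical screen: 70 true positives, 5 false positives,
# 30 false negatives, 95 true negatives
stats = screening_stats(70, 5, 30, 95)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of the condition in the screened population, which is why they differ so much between the male and female subsamples above.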
Abstract:
Introduction Occupational therapists could play an important role in facilitating driving cessation for ageing drivers. This, however, requires an easy-to-learn, standardised on-road evaluation method. This study therefore investigates whether the use of P-drive could be reliably taught to occupational therapists via a short half-day training session. Method Using the English 26-item version of P-drive, two occupational therapists evaluated the driving ability of 24 home-dwelling drivers aged 70 years or over on a standardised on-road route. Experienced driving instructors' on-road, subjective evaluations were then compared with P-drive scores. Results Following a short half-day training session, P-drive was shown to have almost perfect between-rater reliability (ICC(2,1)=0.950, 95% CI 0.889 to 0.978). Reliability was stable across sessions, including the training phase, even if occupational therapists seemed to become slightly less severe in their ratings with experience. P-drive's score was related to the driving instructors' subjective evaluations of driving skills in a non-linear manner (R²=0.445, p=0.021). Conclusion P-drive is a reliable instrument that can easily be taught to occupational therapists and implemented as a way of standardising the on-road driving test.
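The ICC(2,1) reported here is the single-rater, two-way random-effects intraclass correlation, computed from the ANOVA mean squares of a subjects × raters score table. A minimal sketch of that computation (the toy ratings are illustrative assumptions, not the study's data):

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    y = np.asarray(scores, dtype=float)      # subjects x raters
    n, k = y.shape
    grand = y.mean()
    rows = y.mean(axis=1)                    # per-subject means
    cols = y.mean(axis=0)                    # per-rater means
    msr = k * ((rows - grand) ** 2).sum() / (n - 1)          # between-subject MS
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)          # between-rater MS
    resid = y - rows[:, None] - cols[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))           # error MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# e.g. two raters, one systematically scoring one point higher:
# the rater bias lowers absolute agreement below 1
icc = icc2_1([[1, 2], [2, 3], [3, 4]])
```

The `pingouin` package's `intraclass_corr` reports this form as "ICC2" if a ready-made implementation is preferred.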
Abstract:
Four studies investigated the reliability and validity of thin slices of nonverbal behavior from social interactions including (1) how well individual slices of a given behavior predict other slices in the same interaction; (2) how well a slice of a given behavior represents the entirety of that behavior within an interaction; (3) how long a slice is necessary to sufficiently represent the entirety of a behavior within an interaction; (4) which slices best capture the entirety of behavior, across different behaviors; and (5) which behaviors (of six measured behaviors) are best captured by slices. Notable findings included strong reliability and validity for thin slices of gaze and nods, and that a 1.5 min slice from the start of an interaction may adequately represent some behaviors. Results provide useful information to researchers making decisions about slice measurement of behavior.
Abstract:
Automation was introduced many years ago in several diagnostic disciplines such as chemistry, haematology and molecular biology. The first laboratory automation system for clinical bacteriology was released in 2006, and it rapidly proved its value by increasing productivity, allowing a continuous increase in sample volumes despite limited budgets and personnel shortages. Today, two major manufacturers, BD Kiestra and Copan, are commercializing partial or complete laboratory automation systems for bacteriology. The laboratory automation systems are rapidly evolving to provide improved hardware and software solutions to optimize laboratory efficiency. However, the complex parameters of the laboratory and automation systems must be considered to determine the best system for each given laboratory. We address several topics on laboratory automation that may help clinical bacteriologists to understand the particularities and operative modalities of the different systems. We present (a) a comparison of the engineering and technical features of the various elements composing the two different automated systems currently available, (b) the system workflows of partial and complete laboratory automation, which define the basis for laboratory reorganization required to optimize system efficiency, (c) the concept of digital imaging and telebacteriology, (d) the connectivity of laboratory automation to the laboratory information system, (e) the general advantages and disadvantages as well as the expected impacts provided by laboratory automation and (f) the laboratory data required to conduct a workflow assessment to determine the best configuration of an automated system for the laboratory activities and specificities.
Abstract:
A 10-year experience with our automated molecular diagnostic platform, which carries out 91 different real-time PCR assays, is described. Advances and future perspectives in molecular diagnostic microbiology are reviewed: why automation is important; how our platform was implemented; how homemade PCRs were developed; and the advantages and disadvantages of homemade PCRs, including the critical aspects of troubleshooting and the need to further reduce the turnaround time for specific samples, at least in defined clinical settings such as emergencies. The future of molecular diagnosis depends on automation, and in a novel perspective, it is now time to fully acknowledge the true contribution of molecular diagnostics and to reconsider the indications for PCR, also using these tests as first-line assays.
Abstract:
BACKGROUND: The WOSI (Western Ontario Shoulder Instability Index) is a self-administered quality of life questionnaire designed to be used as a primary outcome measure in clinical trials on shoulder instability, as well as to measure the effect of an intervention on any particular patient. It is validated and is reliable and sensitive. As it is designed to measure subjective outcome, it is important that translation should be methodologically rigorous, as it is subject to both linguistic and cultural interpretation. OBJECTIVE: To produce a French language version of the WOSI that is culturally adapted to both European and North American French-speaking populations. MATERIALS AND METHODS: A validated protocol was used to create a French language WOSI questionnaire (WOSI-Fr) that would be culturally acceptable for both European and North American French-speaking populations. Reliability and responsiveness analyses were carried out, and the WOSI-Fr was compared to the F-QuickDASH-D/S (Disability of the Arm, Shoulder and Hand-French translation) and Walch-Duplay scores. RESULTS: A French language version of the WOSI (WOSI-Fr) was accepted by a multinational committee. The WOSI-Fr was then validated using a total of 144 native French-speaking subjects from Canada and Switzerland. Comparison of results on two WOSI-Fr questionnaires completed at a mean interval of 16 days showed that the WOSI-Fr had strong reliability, with a Pearson and intraclass correlation of r=0.85 (P=0.01) and ICC=0.84 [95% CI=0.78-0.88]. Responsiveness, at a mean 378.9 days after surgical intervention, showed strong correlation with that of the F-QuickDASH-D/S, with r=0.67 (P<0.01). Moreover, a standardized response means analysis to calculate effect size for both the WOSI-Fr and the F-QuickDASH-D/S showed that the WOSI-Fr had a significantly greater ability to detect change (SRM 1.55 versus 0.87 for the WOSI-Fr and F-QuickDASH-D/S respectively, P<0.01).
The WOSI-Fr showed fair correlation with the Walch-Duplay. DISCUSSION: A French-language translation of the WOSI questionnaire was created and validated for use in both Canadian and Swiss French-speaking populations. This questionnaire will facilitate outcome assessment in French-speaking settings, collaboration in multinational studies and comparison between studies performed in different countries. TYPE OF STUDY: Multicenter cohort study. LEVEL OF EVIDENCE: II.
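The standardized response mean (SRM) used above to compare responsiveness is the mean pre-post change divided by the standard deviation of that change, so a larger SRM means the instrument detects the intervention effect more consistently. A minimal sketch (the scores are illustrative, not WOSI data):

```python
import numpy as np

def srm(before, after):
    """Standardized response mean: mean change / SD of change."""
    change = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    return change.mean() / change.std(ddof=1)

# e.g. four hypothetical patients' scores before and after surgery
effect = srm([0, 0, 0, 0], [1, 2, 3, 2])
```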
Abstract:
INTRODUCTION: EORTC trial 22991 was designed to evaluate the addition of concomitant and adjuvant short-term hormonal treatment to curative radiotherapy in terms of disease-free survival for patients with intermediate-risk localized prostate cancer. In order to assess compliance with the 3D conformal radiotherapy protocol guidelines, all participating centres were requested to take part in a dummy run procedure. An individual case review was performed for the largest recruiting centres as well. MATERIALS AND METHODS: CT data of an eligible prostate cancer patient were sent to 30 centres, including a description of the clinical case. The investigator was requested to delineate the volumes of interest and to perform treatment planning according to the protocol. Thereafter, the investigators of the 12 most actively recruiting centres were requested to provide data on five randomly selected patients for an individual case review. RESULTS: Volume delineation varied significantly between investigators. Dose constraints for organs at risk (rectum, bladder, hips) were difficult to meet. In the individual case review, no major protocol deviations were observed, but a number of dose reporting problems were documented for centres using IMRT. CONCLUSIONS: Overall, results of this quality assurance program were satisfactory. The efficacy of combining a dummy run procedure with an individual case review is confirmed in this study, as none of the evaluated patient files harboured a major protocol deviation. Quality assurance remains a very important tool in radiotherapy to increase the reliability of trial results. Special attention should be given when designing quality assurance programs for more complex irradiation techniques.