Abstract:
The job of health professionals, including nurses, is considered inherently stressful (Lee & Wang, 2002; Rutledge et al., 2009), and thus it is important to improve and develop specific measures that are sensitive to the demands that health professionals face. This study analysed the psychometric properties of three instruments that focus on the professional experiences of nurses in aspects related to occupational stress, cognitive appraisal, and mental health issues. The evaluation protocol included the Stress Questionnaire for Health Professionals (SQHP; Gomes, 2014), the Cognitive Appraisal Scale (CAS; Gomes, Faria, & Gonçalves, 2013), and the General Health Questionnaire-12 (GHQ-12; Goldberg, 1972). Validity and reliability were examined through statistical analyses (i.e., confirmatory factor analysis, convergent validity, and composite reliability) that revealed adequate values for all of the instruments, namely, a six-factor structure for the SQHP, a five-factor structure for the CAS, and a two-factor structure for the GHQ-12. In conclusion, this study proposes three consistent instruments that may be useful for analysing nurses' adaptation to work contexts.
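For readers unfamiliar with the composite reliability index used above, a minimal sketch of the standard Fornell-Larcker computation from standardized factor loadings follows; the loading values are invented for illustration and are not taken from the SQHP, CAS, or GHQ-12 analyses.

```python
import numpy as np

def composite_reliability(loadings):
    """Fornell-Larcker composite reliability from standardized loadings.

    CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of each indicator is 1 - loading^2.
    """
    loadings = np.asarray(loadings, dtype=float)
    sum_sq = loadings.sum() ** 2
    error_var = np.sum(1.0 - loadings ** 2)
    return sum_sq / (sum_sq + error_var)

# Hypothetical standardized loadings for one factor (not from the study):
print(round(composite_reliability([0.72, 0.68, 0.81, 0.75]), 3))
```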
Abstract:
The research described in this thesis was developed as part of the Reliability and Field Data Management for Multi-component Products (REFIDAM) Project. This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland and was a partnership between Galway-Mayo Institute of Technology and Thermo King Europe. The project aimed to develop a system to manage the information required for reliability assessment and improvement of multi-component products, by establishing information flows within the company and information exchange with fleet users.
Abstract:
The National Institute of Mental Health developed the semi-structured Diagnostic Interview for Genetic Studies (DIGS) for the assessment of major mood and psychotic disorders and their spectrum conditions. The DIGS was translated into French in a collaborative effort of investigators from sites in France and Switzerland. Inter-rater and test-retest reliability of the French version have been established in a clinical sample in Lausanne. Excellent inter-rater reliability was found for schizophrenia, bipolar disorder, major depression, and unipolar schizoaffective disorder, while fair inter-rater reliability was demonstrated for bipolar schizoaffective disorder. Using a six-week test-retest interval, reliability for all diagnoses was found to be fair to good, with the exception of bipolar schizoaffective disorder. The lower test-retest reliability was the result of a relatively long test-retest interval that favored incomplete symptom recall. To increase reliability for lifetime diagnoses in persons not currently affected, best-estimate procedures using additional sources of diagnostic information, such as medical records and reports from relatives, should supplement DIGS information in family-genetic studies. Within such a procedure, the DIGS appears to be a useful part of data collection for genetic studies on major mood disorders and schizophrenia in French-speaking populations.
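Inter-rater reliability for categorical diagnoses such as these is conventionally summarized with Cohen's kappa; the sketch below shows that computation on an invented two-rater agreement table, not the Lausanne data.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square inter-rater agreement table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    observed = np.trace(table) / n          # proportion of exact agreement
    expected = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Hypothetical agreement counts for one diagnosis (rows: rater A, cols: rater B):
print(round(cohens_kappa([[40, 5], [3, 52]]), 3))
```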
Abstract:
Crizotinib is a first-in-class oral anaplastic lymphoma kinase (ALK) inhibitor targeting ALK-rearranged non-small-cell lung cancer. The therapy was approved by the US FDA in August 2011 and received conditional marketing approval by the European Commission in October 2012 for advanced non-small-cell lung cancer. A break-apart FISH-based assay was jointly approved with crizotinib by the FDA. This assay and an immunohistochemistry assay that uses a D5F3 rabbit monoclonal primary antibody were also approved for marketing in Europe in October 2012. While ALK rearrangement has relatively low prevalence, a clinical benefit is exhibited in more than 85% of patients, with median progression-free survival of 8-10 months. In this article, the authors summarize the therapy and alternative test strategies for identifying patients who are likely to respond to therapy, including key issues for effective and efficient testing. The key economic considerations regarding the joint companion diagnostic and therapy are also presented. Given the observed clinical benefit and the relatively high cost of crizotinib therapy, companion diagnostics should be evaluated relative to response to therapy rather than correlation alone whenever possible, and both high inter-rater reliability and external quality assessment programs are warranted.
Abstract:
BACKGROUND This study assesses the validity and reliability of the Spanish version of the DN4 questionnaire as a tool for the differential diagnosis of pain syndromes associated with a neuropathic (NP) or somatic component (non-neuropathic pain, NNP). METHODS A study was conducted consisting of two phases: cultural adaptation into the Spanish language by means of conceptual equivalence, including forward and backward translations in duplicate and cognitive debriefing, and testing of psychometric properties in patients with NP (peripheral, central and mixed) and NNP. The analysis of psychometric properties included reliability (internal consistency, inter-rater agreement and test-retest reliability) and validity (ROC curve analysis, agreement with the reference diagnosis and determination of sensitivity, specificity, and positive and negative predictive values in different subsamples according to type of NP). RESULTS A sample of 164 subjects (99 women, 60.4%; age: 60.4 ± 16.0 years), 94 (57.3%) with NP (36 with peripheral, 32 with central, and 26 with mixed pain) and 70 with NNP, was enrolled. The questionnaire was reliable [Cronbach's alpha coefficient: 0.71, inter-rater agreement coefficient: 0.80 (0.71-0.89), and test-retest intra-class correlation coefficient: 0.95 (0.92-0.97)] and valid for a cut-off value ≥ 4 points, which was the best value to discriminate between NP and NNP subjects. DISCUSSION This study, representing the first validation of the DN4 questionnaire in a language other than the original, not only supported its high discriminatory value for the identification of neuropathic pain, but also provided supplemental psychometric validation (i.e., test-retest reliability, influence of educational level and pain intensity) and showed its validity in mixed pain syndromes.
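As a point of reference for the accuracy indices reported above, the sketch below computes sensitivity, specificity, and predictive values from a 2x2 confusion table; the counts are invented, not the study's published figures.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table
    (test positive means DN4 score >= 4; disease positive means reference NP)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for 164 patients (not the published figures):
print(diagnostic_indices(tp=80, fp=12, fn=14, tn=58))
```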
Abstract:
Decline in gait stability has been associated with increased fall risk in older adults. Reliable and clinically feasible methods of gait instability assessment are needed. This study evaluated the relative and absolute reliability and concurrent validity of the testing procedure of the clinical version of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions. Thirty independent community-dwelling older adults (65-87 years) were tested twice. Participants were instructed to walk within the 6-m narrow path without stepping out. Trial time, number of steps, trial velocity, number of step errors, and number of cognitive task errors were determined. Intraclass correlation coefficients (ICCs) were calculated as indices of agreement, and a graphic approach called the "mountain plot" was applied to help interpret the direction and magnitude of disagreements between testing procedures. The smallest detectable change and smallest real difference (SRD) were computed to determine clinically relevant improvement at the group and individual levels, respectively. Concurrent validity was assessed using the Performance Oriented Mobility Assessment Tool (POMA) and the Short Physical Performance Battery (SPPB). Test-retest agreement (ICC(1,2)) varied from 0.77 to 0.92 in ST and from 0.78 to 0.92 in DT conditions, with no apparent systematic differences between testing procedures demonstrated by the mountain plot graphs. The smallest detectable change and smallest real difference were small for motor task performance and larger for cognitive errors. Significant correlations were observed for trial velocity and trial time with the POMA and SPPB. The present results indicate that the NPWT testing procedure is highly reliable and reproducible.
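The smallest detectable change reported above is conventionally derived from the ICC via the standard error of measurement; a minimal sketch of that computation follows, with invented summary statistics rather than the NPWT data.

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from the test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def smallest_detectable_change(sem, n=1):
    """95% smallest detectable change; n > 1 gives the group-level value."""
    return 1.96 * math.sqrt(2.0) * sem / math.sqrt(n)

# Hypothetical values (not from the study): trial-time SD of 4.0 s, ICC of 0.90.
sem = sem_from_icc(sd=4.0, icc=0.90)
print(round(smallest_detectable_change(sem), 2))        # individual level
print(round(smallest_detectable_change(sem, n=30), 2))  # group of 30
```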
Abstract:
Introduction: Carbon monoxide (CO) poisoning is one of the most common causes of fatal poisoning. Symptoms of CO poisoning are nonspecific, and documentation of elevated carboxyhemoglobin (HbCO) levels in an arterial blood sample is the only standard way of confirming suspected exposure. The treatment of CO poisoning requires normobaric or hyperbaric oxygen therapy, according to the symptoms and HbCO levels. A new device, the Rad-57 pulse CO-oximeter, allows noninvasive transcutaneous measurement of the blood carboxyhemoglobin level (SpCO) by measurement of light wavelength absorptions. Methods: Prospective cohort study with a sample of patients admitted between October 2008 - March 2009 and October 2009 - March 2010 in the emergency services (ES) of a Swiss regional hospital and a Swiss university hospital (Burn Center). In cases of suspected CO poisoning, three successive noninvasive measurements were performed simultaneously with one arterial blood HbCO test. A control group included patients admitted to the ES for other complaints (cardiac insufficiency, respiratory distress, acute renal failure) but requiring arterial blood testing. Informed consent was obtained from all patients. The primary endpoint was to assess the agreement between the measurements made by the Rad-57 (SpCO) and the blood levels (HbCO). Results: 50 patients were enrolled, among whom 32 were admitted for suspected CO poisoning. Baseline demographic and clinical characteristics of the patients are presented in table 1. The median age was 37.7 ± 11.8 years, 56% being male. Median laboratory carboxyhemoglobin levels (HbCO) were 4.25% (95% CI 0.6-28.5) for intoxicated patients and 1.8% (95% CI 1.0-5.3) for control patients. Only five patients presented with HbCO levels ≥ 15%. The results disclose relatively fair correlations between the SpCO levels obtained by the Rad-57 and the standard HbCO, without any false negative results. However, the Rad-57 tended to under-estimate SpCO for intoxicated patients with HbCO levels >10% (fig. 1). Conclusion: Noninvasive transcutaneous measurement of the blood carboxyhemoglobin level is easy to use. The correlation seems to be correct for low to moderate levels (<15%). For higher values, we observe a trend of the Rad-57 to under-estimate the HbCO levels. Apart from this potential limitation and a few cases of false-negative results described in the literature, the Rad-57 may be useful for the initial triage and diagnosis of CO poisoning.
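Method agreement of this kind is often examined with Bland-Altman statistics (bias and 95% limits of agreement); a minimal sketch follows, with invented paired readings rather than the study sample.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, (bias - spread, bias + spread)

# Hypothetical paired readings in % (Rad-57 SpCO vs laboratory HbCO):
spco = [3.8, 5.1, 2.2, 9.5, 12.0, 1.9]
hbco = [4.2, 5.6, 2.0, 10.8, 14.1, 1.8]
print(bland_altman(spco, hbco))  # negative bias means SpCO underestimates HbCO
```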
Abstract:
There is a need for more efficient methods giving insight into the complex mechanisms of neurotoxicity. Testing strategies including in vitro methods have been proposed to comply with this requirement. With the present study we aimed to develop a novel in vitro approach which mimics in vivo complexity, detects neurotoxicity comprehensively, and provides mechanistic insight. For this purpose we combined rat primary re-aggregating brain cell cultures with a mass spectrometry (MS)-based metabolomics approach. As a proof of principle, we treated developing re-aggregating brain cell cultures for 48 h with the neurotoxicant methyl mercury chloride (0.1-100 μM) and the brain stimulant caffeine (1-100 μM) and acquired cellular metabolic profiles. To detect toxicant-induced metabolic alterations, the profiles were analysed using commercial software, which revealed patterns in the multi-parametric dataset by principal component analysis (PCA) and recognised the most significantly altered metabolites. PCA revealed concentration-dependent cluster formations for methyl mercury chloride (0.1-1 μM) and treatment-dependent cluster formations for caffeine (1-100 μM) at sub-cytotoxic concentrations. Five relevant metabolites responsible for the concentration-dependent alterations following methyl mercury chloride treatment could be identified using MS/MS fragmentation analysis: gamma-aminobutyric acid, choline, glutamine, creatine, and spermine. Their respective mass-ion intensities demonstrated metabolic alterations in line with the literature and suggest that the metabolites could be biomarkers for mechanisms of neurotoxicity or neuroprotection. In addition, we evaluated whether the approach could identify neurotoxic potential by testing eight compounds which have target organ toxicity in the liver, kidney or brain at sub-cytotoxic concentrations. PCA revealed cluster formations largely dependent on target organ toxicity, indicating potential for the development of a neurotoxicity prediction model. With such results it would be useful to perform a validation study to determine the reliability, relevance and applicability of this approach to neurotoxicity screening. Thus, for the first time, we show the benefits and utility of in vitro metabolomics to comprehensively detect neurotoxicity and to discover new biomarkers.
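A minimal sketch of the clustering step is shown below, assuming a metabolite-intensity matrix with one row per culture; it uses plain numpy in place of the commercial software named above, and the data are random placeholders.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered metabolite profiles onto the top principal components."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    # SVD of the centered matrix gives the principal axes in Vt.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Placeholder data: 12 cultures x 50 mass-ion intensities (random, for shape only).
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 50))
print(pca_scores(X).shape)  # (12, 2) - the coordinates inspected for clusters
```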
Abstract:
When researchers introduce a new test they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing for validity rather than merely looking for it.
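The reliability-corrected correlation mentioned above follows the classical correction for attenuation; a one-line sketch of that formula is given below, with invented reliabilities rather than the AEIM figures.

```python
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Classical disattenuation: r_true = r_observed / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical numbers (not from the re-analysis): observed r = .55,
# predictor reliability .80, criterion reliability .79.
print(round(correct_for_attenuation(0.55, 0.80, 0.79), 2))
```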
Abstract:
The characterization and categorization of coarse aggregates for use in portland cement concrete (PCC) pavements is a highly refined process at the Iowa Department of Transportation. Over the past 10 to 15 years, much effort has been directed at pursuing direct testing schemes to supplement or replace existing physical testing schemes. Direct testing refers to the process of directly measuring the chemical and mineralogical properties of an aggregate and then attempting to correlate those measured properties to historical performance information (i.e., field service record). This is in contrast to indirect measurement techniques, which generally attempt to extrapolate the performance of laboratory test specimens to expected field performance. The purpose of this research project was to investigate and refine the use of direct testing methods, such as X-ray analysis techniques and thermal analysis techniques, to categorize carbonate aggregates for use in portland cement concrete. The results of this study indicated that the general testing methods that are currently used to obtain data for estimating service life tend to be very reliable and have good to excellent repeatability. Several changes in the current techniques were recommended to enhance the long-term reliability of the carbonate database. These changes can be summarized as follows: (a) More stringent limits need to be set on the maximum particle size in the samples subjected to testing; this should help to improve the reliability of all three of the test methods studied during this project. (b) X-ray diffraction testing needs to be refined to incorporate the use of an internal standard; this will help to minimize the influence of sample-positioning errors, and it will also allow the concentrations of the various minerals present in the samples to be calculated. (c) Thermal analysis data need to be corrected for moisture content and clay content prior to calculating the carbonate content of the sample.
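To make recommendation (c) concrete, the sketch below shows the usual thermogravimetric arithmetic: the carbonate fraction is estimated from the CO2 mass-loss step using calcite stoichiometry, after the moisture loss is accounted for. The clay correction mentioned above is omitted for brevity, and the sample masses are invented.

```python
# Molar masses (g/mol) for calcite stoichiometry: CaCO3 -> CaO + CO2.
M_CACO3 = 100.09
M_CO2 = 44.01

def carbonate_content(mass_initial, mass_after_moisture, mass_final):
    """Carbonate fraction of the dry sample from TGA mass-loss steps.

    mass_after_moisture: mass once free moisture is driven off (~105 C hold).
    mass_final: mass after carbonate decomposition (~950 C).
    """
    co2_loss = mass_after_moisture - mass_final
    return (co2_loss * M_CACO3 / M_CO2) / mass_after_moisture

# Hypothetical masses in mg (not project data):
print(round(carbonate_content(50.0, 49.2, 29.8), 3))
```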
Abstract:
This thesis surveys temporal and stochastic software reliability models and examines a few of the models in practice. The theoretical part covers the key definitions and metrics used in describing and assessing software reliability, as well as the descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of risk-based (hazard rate) models. The second group comprises models based on fault "seeding" and tagging. The empirical part contains the descriptions and results of the experiments. The experiments were carried out using three models from the first group: the Jelinski-Moranda model, the first geometric model, and the simple exponential model. The purpose of the experiments was to study how the distribution of the input data affects the performance of the models and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved the most sensitive to the distribution because of convergence problems, and the first geometric model the most sensitive to changes in the amount of data.
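For reference, a minimal sketch of Jelinski-Moranda estimation follows; it profiles out the per-fault detection rate and scans candidate fault counts, which also illustrates the convergence problem noted above (the likelihood can keep growing in N when the data show no reliability growth). The inter-failure times are invented, not the thesis data.

```python
import numpy as np

def jm_loglik(N, times):
    """Profile log-likelihood of the Jelinski-Moranda model for N total faults.

    Inter-failure time i is exponential with rate phi * (N - i + 1); phi is
    profiled out via its closed-form MLE.
    """
    n = len(times)
    if N < n:
        return -np.inf
    remaining = N - np.arange(n)          # N, N-1, ..., N-n+1
    weighted = np.sum(remaining * times)
    phi = n / weighted
    return np.sum(np.log(phi * remaining)) - phi * weighted

# Hypothetical inter-failure times (hours), not the thesis data:
times = np.array([7.0, 11.0, 8.0, 10.0, 15.0, 22.0, 30.0, 45.0])
candidates = range(len(times), 200)
n_hat = max(candidates, key=lambda N: jm_loglik(N, times))
print(n_hat)  # MLE of the total number of faults
```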
Abstract:
Industrial applications nowadays increasingly require real-time data processing. Reliability is one of the most important properties of a system capable of real-time data processing. To achieve it, both the hardware and the software must be tested. The main focus of this thesis is hardware testing and hardware testability, because a reliable hardware platform is the foundation for future real-time systems. The thesis presents the design of a processor board suitable for digital signal processing. The processor board is intended for the predictive condition monitoring of electrical machines. The latest DFT (Design for Testability) methods are introduced and applied in the design of the processor board together with older methods. Experiences with, and observations on, the applicability of the methods are reported at the end of the work. The aim of the work is to develop a component for a web-based condition monitoring system that has been under development at the Department of Electrical Engineering of Lappeenranta University of Technology.
Abstract:
The problem of software (SW) defects is becoming more and more topical because of the increasing amount of software and its growing complexity. The majority of these defects are found during the testing phase, which consumes about 40-50% of the development effort. Test automation allows the cost of this process to be reduced and testing effectiveness to be increased. In the mid-1980s the first tools for automated testing appeared, and the automated process was implemented in different kinds of SW testing. In a short time it became obvious that automated testing can cause many problems, such as increased product cost, decreased reliability, and even project failure. This thesis describes the automated testing process and its concept, lists the main problems, and gives an algorithm for automated test tool selection. This work also presents an overview of the main automated test tools for embedded systems.
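The selection algorithm itself is not reproduced in the abstract; purely to illustrate the weighted-criteria form such algorithms typically take, here is a hypothetical scoring sketch in which the criteria, weights, and scores are all invented.

```python
def rank_tools(tools, weights):
    """Rank candidate test tools by a weighted sum of per-criterion scores (0-10)."""
    def total(scores):
        return sum(weights[c] * scores[c] for c in weights)
    return sorted(tools, key=lambda t: total(t[1]), reverse=True)

# Hypothetical criteria and candidates (not from the thesis):
weights = {"platform_support": 0.4, "cost": 0.25, "script_reuse": 0.35}
tools = [
    ("Tool A", {"platform_support": 8, "cost": 4, "script_reuse": 7}),
    ("Tool B", {"platform_support": 6, "cost": 9, "script_reuse": 5}),
]
for name, _ in rank_tools(tools, weights):
    print(name)
```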
Abstract:
To date there is no documented procedure to extrapolate findings of an isometric nature to a whole-body performance setting. The purpose of this study was to quantify the reliability of perceived exertion to control neuromuscular output during an isometric contraction. 21 varsity athletes completed a maximal voluntary contraction and a 2 min constant-force contraction at both the start and end of the study. Between pre- and post-testing, all participants completed a 2 min constant perceived-exertion contraction once a day for 4 days. The intra-class correlation coefficient (R=0.949) and standard error of measurement (SEM=5.12 Nm) indicated that the isometric contraction was reliable. Limits of agreement demonstrated only moderate initial reliability, yet with smaller limits towards the end of the 4 training sessions. In conclusion, athletes naïve to a constant-effort isometric contraction will produce reliable and acceptably stable results after one familiarization session has been completed.
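For reference, the sketch below computes a one-way test-retest ICC and the standard error of measurement from a subjects-by-trials matrix; the torque values are invented, not the study data.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for a (subjects x trials) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_between = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def sem(data, icc):
    """Standard error of measurement: overall SD * sqrt(1 - ICC)."""
    return np.asarray(data, float).std(ddof=1) * np.sqrt(1.0 - icc)

# Hypothetical pre/post torques in Nm for five athletes (not the study data):
torque = np.array([[180.0, 184.0], [165.0, 161.0], [200.0, 197.0],
                   [150.0, 155.0], [172.0, 170.0]])
r = icc_oneway(torque)
print(round(r, 3), round(sem(torque, r), 2))
```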
Abstract:
Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss the optimal designs under three optimality criteria: D-, A-, and Q-optimality. We note that the classical designs are optimal only if the assumed model is correct. Because predictions from ALT experimental data are made by extrapolating from stress levels higher than the normal operating condition, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, the method of construction for robust designs is also explored.
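As a simplified illustration of the D-optimality criterion mentioned above, the sketch below compares candidate plans by det(X'X) under a linear life-stress approximation; the stress levels and unit allocations are invented, and censoring is ignored.

```python
import numpy as np

def d_criterion(stress_levels, allocations):
    """det(X'X) for a two-parameter (intercept + stress) linear approximation.

    stress_levels: standardized stresses used in the plan.
    allocations: number of units observed at each stress.
    """
    rows = [np.array([1.0, s]) for s, m in zip(stress_levels, allocations)
            for _ in range(m)]
    X = np.vstack(rows)
    return np.linalg.det(X.T @ X)

# Two hypothetical plans with 30 units each (not from the thesis):
print(d_criterion([0.5, 1.0], [20, 10]))            # two widely spaced levels
print(d_criterion([0.7, 0.85, 1.0], [10, 10, 10]))  # three closer levels
```

Under this approximation the widely spaced two-level plan yields the larger determinant, which matches the classical result that D-optimal designs spread the stress levels apart.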