874 results for VALIDITY OF TESTS


Relevance:

100.00%

Publisher:

Abstract:

Objective. The aim of this study was to verify the possibility of lactate minimum (LM) determination during a walking test and the validity of such an LM protocol in predicting the maximal lactate steady-state (MLSS) intensity. Design. Eleven healthy subjects (24.2 ± 4.5 yr; 74.3 ± 7.7 kg; 176.9 ± 4.1 cm) performed LM tests on a treadmill, walking at 5.5 km·h⁻¹ at 20-22% inclination until voluntary exhaustion to induce metabolic acidosis. After 7 minutes of recovery the participants performed an incremental test starting at a 7% incline with increments of 2% every 3 minutes until exhaustion. A polynomial modeling approach (LMp) and visual inspection (LMv) were used to identify the LM as the exercise intensity associated with the lowest [bLac] during the test. Participants also underwent 24 constant-intensity tests of 30 minutes to determine the MLSS intensity. Results. There were no differences among LMv (12.6 ± 1.7%), LMp (13.1 ± 1.5%), and MLSS (13.6 ± 2.1%), and Bland-Altman plots showed acceptable agreement between them. Conclusion. It was possible to identify the LM during walking tests with intensity imposed by treadmill inclination, and the protocol appeared valid for identifying the exercise intensity associated with the MLSS. Copyright © 2012 Guilherme Morais Puga et al.
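The polynomial modeling approach (LMp) amounts to fitting a low-order polynomial to blood lactate as a function of exercise intensity and locating its minimum. A minimal sketch with made-up lactate values (not the study's data):

```python
import numpy as np

# Illustrative blood-lactate readings ([bLac], mmol/L) from the incremental
# phase; intensities are treadmill inclinations (%). Values are fabricated.
incline = np.array([7.0, 9.0, 11.0, 13.0, 15.0, 17.0])
blac    = np.array([5.8, 4.6, 4.0, 3.9, 4.3, 5.2])

# Second-order polynomial fit; the lactate minimum is the vertex -b/(2a).
a, b, c = np.polyfit(incline, blac, 2)
lm_intensity = -b / (2.0 * a)
print(f"estimated LM intensity: {lm_intensity:.1f}% incline")
```

Visual inspection (LMv) would instead take the tested stage with the lowest measured [bLac]; the polynomial vertex interpolates between stages.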


ABSTRACT Background: Patients with dementia may be unable to describe their symptoms, and caregivers frequently suffer an emotional burden that can interfere with their judgment of the patient's behavior. The Neuropsychiatric Inventory-Clinician rating scale (NPI-C) was therefore developed as a comprehensive and versatile instrument to assess and accurately measure neuropsychiatric symptoms (NPS) in dementia, using information from caregiver and patient interviews and any other relevant available data. The present study is a follow-up to the original cross-national NPI-C validation, evaluating the reliability and concurrent validity of the NPI-C in quantifying psychopathological symptoms in dementia in a large Brazilian cohort. Methods: Two blinded raters evaluated 312 participants (156 patient-knowledgeable informant dyads) using the NPI-C, for a total of 624 observations in five Brazilian centers. Inter-rater reliability was determined through intraclass correlation coefficients for the NPI-C domains and the traditional NPI. Convergent validity included correlations of specific domains of the NPI-C with the Brief Psychiatric Rating Scale (BPRS), the Cohen-Mansfield Agitation Index (CMAI), the Cornell Scale for Depression in Dementia (CSDD), and the Apathy Inventory (AI). Results: Inter-rater reliability was strong for all NPI-C domains. There were high correlations between NPI-C/delusions and the BPRS, NPI-C/apathy-indifference and the AI, NPI-C/depression-dysphoria and the CSDD, NPI-C/agitation and the CMAI, and NPI-C/aggression and the CMAI. There were moderate correlations between NPI-C/aberrant vocalizations and the CMAI, and between NPI-C/hallucinations and the BPRS. Conclusion: The NPI-C is a comprehensive tool that provides accurate measurement of NPS in dementia, with high concurrent validity and inter-rater reliability in the Brazilian setting. In addition to universal assessment, the NPI-C can be completed by individual domains.
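The inter-rater reliability reported here rests on intraclass correlation coefficients. A minimal hand-rolled sketch of a two-way random-effects, single-measure ICC(2,1) for a fixed pair of raters, on hypothetical domain scores (not the study's data):

```python
import numpy as np

# Hypothetical ratings: rows = subjects, columns = two blinded raters.
scores = np.array([[4, 5], [2, 2], [6, 7], [3, 3], [8, 8], [1, 2]], float)

n, k = scores.shape
grand = scores.mean()
# Mean squares from the two-way (subjects x raters) decomposition.
ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
resid = scores - scores.mean(axis=1, keepdims=True) \
        - scores.mean(axis=0, keepdims=True) + grand
ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, absolute agreement, single measure.
icc21 = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc21:.2f}")
```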
© International Psychogeriatric Association 2013.


Inspection for corrosion along the weld seam lines of gas storage spheres must be done periodically. To date this inspection has been done manually, at high cost and with a high risk of injury to inspection personnel. The Brazilian petroleum company Petrobras is seeking cost reduction and personnel safety through the use of autonomous robot technology. This paper presents the development of a robot capable of autonomously following a weld line while transporting corrosion measurement sensors. The robot uses a pair of sensors, each composed of a laser source and a video camera, that allow estimation of the center of the weld line. Mechanically, the robot uses four magnetic wheels to adhere to the sphere's surface and was constructed so that three wheels are always in contact with the sphere's metallic surface, which guarantees enough magnetic attraction to hold the robot on the surface at all times. Additionally, an independently actuated table for attaching the corrosion inspection sensors was included for small position corrections. Tests conducted in the laboratory and on a real sphere showed the validity of the proposed approach and implementation.
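One common way to recover the laser stripe position from a laser-and-camera pair is a per-column centroid of the bright laser pixels, with the weld seam showing up as a bump in the recovered profile. The abstract does not detail the actual algorithm, so the following is only an illustrative sketch on a synthetic frame:

```python
import numpy as np

# Toy grayscale frame: dim background noise plus a bright laser stripe whose
# row position bulges near column 40, mimicking the weld seam cap.
rng = np.random.default_rng(0)
frame = rng.uniform(0, 0.1, size=(60, 80))
true_rows = 30 + (5 * np.exp(-((np.arange(80) - 40) ** 2) / 50)).astype(int)
frame[true_rows, np.arange(80)] = 1.0            # laser stripe pixels

mask = frame > 0.5                               # keep only laser pixels
rows = np.arange(60)[:, None]
centroid = (mask * rows).sum(axis=0) / mask.sum(axis=0)  # stripe row per column
seam_col = int(np.argmax(centroid))              # seam bump = peak column
print(f"estimated weld seam at column {seam_col}")
```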


Objective: The aim of this study was to analyze the criteria employed by maxillofacial surgeons when requesting preoperative tests. Materials and methods: Thirty maxillofacial surgeons working in Aracaju (Brazil) received a questionnaire to fill out. The study inquired about the practice of requesting preoperative tests for healthy patients scheduled to undergo elective surgery. Results: Most of the surgeons interviewed requested tests that are not recommended for the case in question. The most frequently requested tests were the complete blood count, coagulation tests, blood glucose, and chest radiograph. Conclusion: The absence of strict rules for requesting preoperative tests causes uncertainty and a lack of criteria regarding pre-surgical conduct. It was not possible to clearly define the criteria used by surgeons for requesting such tests, as the clinical characteristics of the hypothetical case presented would suggest a smaller number of tests. (C) 2011 European Association for Cranio-Maxillo-Facial Surgery.


OBJECTIVE: The frequent occurrence of inconclusive serology in blood banks and the absence of a gold-standard test for Chagas disease led us to examine the efficacy of the blood culture test and five commercial tests (ELISA, IIF, HAI, c-ELISA, rec-ELISA) used in screening blood donors for Chagas disease, and to investigate the prevalence of Trypanosoma cruzi infection among donors with inconclusive screening serology with respect to some epidemiological variables. METHODS: To obtain the estimates of interest we used a Bayesian latent class model with covariates included through the logit link. RESULTS: Better performance was observed with some categories of epidemiological variables. In addition, all pairs of tests (excluding the blood culture test) proved to be good alternatives both for screening (sensitivity > 99.96% in parallel testing) and for confirmation (specificity > 99.93% in serial testing) of Chagas disease. The prevalence of 13.30% observed in the stratum of donors with inconclusive serology suggests that most of these donors are probably serologically non-reactive. In addition, depending on the level of specific epidemiological variables, the absence of infection can be predicted in this group with a probability of 100% from the pairs of tests under parallel testing. CONCLUSION: The epidemiological variables can improve test results and thus help clarify inconclusive serology screening results. Moreover, all pairwise combinations of the five commercial tests are good alternatives for confirming results.
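For intuition, the parallel- and serial-testing accuracies quoted above behave as follows under the simplifying assumption of conditional independence between tests. The study itself used a Bayesian latent class model, so the values below are purely illustrative, not the study's estimates:

```python
def parallel(se1, sp1, se2, sp2):
    """Positive if EITHER test is positive (screening-oriented):
    sensitivity rises, specificity falls."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def serial(se1, sp1, se2, sp2):
    """Positive only if BOTH tests are positive (confirmation-oriented):
    specificity rises, sensitivity falls."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

# Illustrative accuracies for two ELISA-type assays (assumed values).
se_p, sp_p = parallel(0.98, 0.97, 0.99, 0.96)
se_s, sp_s = serial(0.98, 0.97, 0.99, 0.96)
print(f"parallel: sensitivity {se_p:.4f}, specificity {sp_p:.4f}")
print(f"serial:   sensitivity {se_s:.4f}, specificity {sp_s:.4f}")
```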


Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, increasing the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature will probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In System Programming) memory and a robust architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment lent by the Roma University and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase-2 tower in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed the storage of about 90 million events in 7 equivalent days of live time of the beam. My activities concerned mainly the realization of a firmware interface to and from the MAPS chip in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 µm presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed an estimate of the resolution of the pixel sensor, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
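The pitch/sqrt(12) figure quoted above is the RMS of a uniform distribution over one pixel pitch, i.e. the expected intrinsic resolution for binary (hit/no-hit) position readout. A one-line check with an assumed pitch (the actual APSEL-4D pitch is not stated here):

```python
import math

# Binary position readout: the hit position is uniform within one pitch,
# so the RMS residual is pitch / sqrt(12).
pitch_um = 50.0                       # assumed pixel pitch, for illustration
resolution_um = pitch_um / math.sqrt(12)
print(f"expected intrinsic resolution: {resolution_um:.1f} um")
```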


The present dissertation focuses on burnout and work engagement among teachers, with special focus on the Job Demands-Resources (JD-R) Model. Chapter 1 focuses on teacher burnout. It aims to investigate the role of efficacy beliefs using negatively worded inefficacy items instead of positive ones, and to establish whether depersonalization and cynicism can be considered two different dimensions of the teacher burnout syndrome. Chapter 2 investigates the factorial validity of the instruments used to measure work engagement (i.e., the Utrecht Work Engagement Scale, UWES-17 and UWES-9). Moreover, because the study is partly longitudinal in nature, the stability of engagement across time can also be investigated. Finally, based on cluster analyses, two groups that differ in levels of engagement are compared with respect to their job and personal resources (i.e., possibilities for personal development, work-life balance, and self-efficacy), positive organizational attitudes and behaviours (i.e., job satisfaction and organizational citizenship behaviour), and perceived health. Chapter 3 tests the JD-R model longitudinally, also integrating the role of personal resources (i.e., self-efficacy). This chapter seeks to identify the job demands and the job and personal resources that best discriminate burned-out teachers from non-burned-out teachers, as well as engaged teachers from non-engaged teachers. Chapter 4 uses a diary study to extend knowledge about the dynamic nature of the JD-R model by considering between- and within-person variations with regard to both the motivational and the health-impairment processes.


A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. Because the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge affects the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model was modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations were compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests were performed: electrical failure tests on sensors and actuators; hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers; and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented inside the TCU software. A test automation procedure was developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
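The stiffness problem described above can be made concrete with the lumped chamber-pressure equation dP/dt = (β/V)(Q_in − Q_out): a large bulk modulus β yields a very small pressure time constant, which bounds the stable step size of an explicit fixed-step solver. All parameter values below are illustrative assumptions, not those of the actual DCT model:

```python
# With an outflow linear in pressure, Q_out = k * P, the pressure ODE
# dP/dt = (beta / V) * (Q_in - k * P) has time constant tau = V / (beta * k).
beta = 1.5e9        # oil bulk modulus, Pa (typical order for hydraulic oil)
V = 2.0e-6          # chamber volume, m^3 (assumed)
k = 1.0e-11         # flow-pressure coefficient, m^3/(s*Pa) (assumed)

tau = V / (beta * k)
print(f"pressure time constant: {tau*1e3:.3f} ms")
# An explicit fixed-step (forward-Euler) solver is only stable for dt < 2*tau;
# here that limit is well below a typical ~1 ms real-time step, which is why
# the hydraulic model had to be simplified for HIL use.
print(f"explicit-Euler stability limit: dt < {2*tau*1e3:.3f} ms")
```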


The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, for tight control of systematic uncertainties.

To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.

To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations.
The results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E that was seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication for a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
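For reference, the "radius of the standard dipole" mentioned above follows from the form-factor slope at Q^2 = 0, ⟨r²⟩ = −6 dG_E/dQ²|₀, applied to the standard dipole G_D(Q²) = (1 + Q²/0.71 (GeV/c)²)⁻²:

```python
import math

# <r^2> = -6 dG/dQ^2 at Q^2 = 0; for the dipole this gives r^2 = 12/0.71
# in GeV^-2, converted to fm with hbar*c = 0.19733 GeV*fm.
hbar_c = 0.19733                          # GeV*fm
r_dipole = hbar_c * math.sqrt(12 / 0.71)
print(f"standard-dipole radius: {r_dipole:.3f} fm")
```

This evaluates to about 0.811 fm, visibly below the extracted charge radius of 0.879 fm quoted above.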


This thesis describes the ultra-precise determination of the g-factor of the electron bound to hydrogen-like 28Si13+. The experiment is based on the simultaneous determination of the cyclotron and Larmor frequencies of a single ion stored in a triple Penning-trap setup. The continuous Stern-Gerlach effect is used to couple the spin of the bound electron to the motional frequencies of the ion via a magnetic bottle, which allows the non-destructive determination of the spin state. To this end, a highly sensitive cryogenic detection system was developed, which allowed the direct, non-destructive detection of the eigenfrequencies with the required precision.

The development of a novel, phase-sensitive detection technique finally allowed the determination of the g-factor with a relative accuracy of 40 ppt, which was previously inconceivable. The comparison of the value determined here with the value predicted by quantum electrodynamics (QED) allows verification of the validity of this fundamental theory under the extreme conditions of the strong binding potential of a highly charged ion. The exact agreement of theory and experiment is an impressive demonstration of the exactness of QED. The experimental possibilities created in this work will allow in the near future not only further tests of theory, but also the determination of the mass of the electron with a precision that exceeds the current literature value by more than an order of magnitude.
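In a Penning trap the free-space cyclotron frequency is usually reconstructed from the three measured trap eigenfrequencies via the Brown-Gabrielse invariance theorem, which is insensitive to the leading trap misalignment and ellipticity imperfections. A sketch with assumed, order-of-magnitude frequencies (not the experiment's values):

```python
import math

# Invariance theorem: nu_c = sqrt(nu_+^2 + nu_z^2 + nu_-^2)
nu_plus = 24.0e6     # modified cyclotron frequency, Hz (assumed)
nu_z = 700.0e3       # axial frequency, Hz (assumed)
nu_minus = 10.2e3    # magnetron frequency, Hz (assumed)

nu_c = math.sqrt(nu_plus**2 + nu_z**2 + nu_minus**2)
print(f"free cyclotron frequency: {nu_c/1e6:.4f} MHz")

# The g-factor then follows from the measured Larmor-to-cyclotron frequency
# ratio Gamma = nu_L / nu_c via g = 2 * Gamma * (q_ion/e) * (m_e / m_ion).
```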


Tuberculosis (TB) in South American camelids (SAC) is caused by Mycobacterium bovis or Mycobacterium microti. Two serological methods, rapid testing (RT) and the dual-path platform (DPP) assay, were evaluated using naturally infected SAC. The study population included 156 alpacas and 175 llamas in Great Britain, Switzerland, and the United States. TB due to M. bovis (n = 44) or M. microti (n = 8) in 35 alpacas and 17 llamas was diagnosed by gross pathology examination and culture. Control animals were from herds with no TB history. The RT and the DPP assay showed sensitivities of 71% and 74%, respectively, for alpacas, while the sensitivity for llamas was 77% for both assays. The specificity of the DPP assay (98%) was higher than that of RT (94%) for llamas; the specificities of the two assays were identical (98%) for alpacas. When the two antibody tests were combined, the parallel-testing interpretation (applied when either assay produced a positive result) enhanced the sensitivities of antibody detection to 89% for alpacas and 88% for llamas but at the cost of lower specificities (97% and 93%, respectively), whereas the serial-testing interpretation (applied when both assays produced a positive result) maximized the specificity to 100% for both SAC species, although the sensitivities were 57% for alpacas and 65% for llamas. Over 95% of the animals with evidence of TB failed to produce skin test reactions, thus confirming concerns about the validity of this method for testing SAC. The findings suggest that serological assays may offer a more accurate and practical alternative for antemortem detection of camelid TB.


The short, portable mental status questionnaire (SPMSQ) developed by Pfeiffer has several advantages over previous short instruments designed to assess the intellectual functioning of older adults. It is based upon data from both institutionalized and community-dwelling elderly. Although Pfeiffer proposed a four-group classification, he used two groups in his initial validation study: (a) intact/mildly impaired, and (b) moderately/severely impaired. The present study compared clinicians' ratings with those based upon the SPMSQ scores, and examined the validity of the four-group classification. The sample included 181 subjects from seven intermediate care facilities and nine home-care agencies. All were assessed by the OARS questionnaire, which includes the SPMSQ. Three discriminant analyses were performed with three different criteria, for two-group, three-group, and four-group models. Results indicated that the two-group model (intact/mildly impaired and moderately/severely impaired) permitted significant discrimination. The four-group model, however, gave less distinct results. In particular, patients who were mildly intellectually impaired could not be clearly distinguished from those who were intact or from those who were moderately impaired. The three-group model (minimally, moderately, severely impaired) seemed to offer the best compromise between the gross dichotomy of the original two-group system and the less accurate four-category system.
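The two-group case can be sketched as a Fisher linear discriminant. The data below are fabricated for illustration (SPMSQ error counts and ages as predictors) and are not the study's sample:

```python
import numpy as np

# Fabricated predictors: [SPMSQ errors, age] for two diagnostic groups.
rng = np.random.default_rng(1)
g1 = rng.normal([2.0, 74.0], [1.0, 6.0], size=(40, 2))   # intact/mild
g2 = rng.normal([7.0, 78.0], [1.5, 6.0], size=(40, 2))   # moderate/severe

# Fisher discriminant direction w = Sw^-1 (m2 - m1), pooled within-group
# scatter; classify by projecting onto w and thresholding at the midpoint.
m1, m2 = g1.mean(axis=0), g2.mean(axis=0)
sw = np.cov(g1.T) * (len(g1) - 1) + np.cov(g2.T) * (len(g2) - 1)
w = np.linalg.solve(sw, m2 - m1)
threshold = w @ (m1 + m2) / 2

pred = np.concatenate([g1 @ w > threshold, g2 @ w > threshold])
truth = np.concatenate([np.zeros(40), np.ones(40)]).astype(bool)
accuracy = np.mean(pred == truth)
print(f"two-group discriminant accuracy: {accuracy:.2f}")
```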


In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Methods: Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. Results: All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. Conclusions: The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.
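In generalizability theory, once the variance components have been estimated in a G-study, a decision (D-) study projects the generalizability coefficient for any number of judges. The components below are assumed, chosen so that two judges reproduce a coefficient of about 0.77:

```python
# Relative G coefficient for n judges: E(rho^2) = var_p / (var_p + var_rel / n),
# where var_p is the variance of the objects of measurement (encounters) and
# var_rel is the relative error variance for a single judge. Assumed values.
var_person = 1.0
var_relative = 0.6

for n_judges in (1, 2, 4):
    g_coef = var_person / (var_person + var_relative / n_judges)
    print(f"{n_judges} judge(s): G = {g_coef:.2f}")
```

Such a projection shows directly how much reliability is gained by averaging over additional judges.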


OBJECTIVE: To systematically and critically review the evidence used to derive estimates of the costs and cost-effectiveness of chlamydia screening. METHODS: Systematic review. We searched 11 electronic bibliographic databases from the earliest date available to August 2004 using keywords including chlamydia, pelvic inflammatory disease, economic evaluation, and cost. We included studies of chlamydia screening in males and/or females over 14 years of age, including studies of diagnostic tests, contact tracing, and treatment as part of a screening programme. Outcomes included cases of chlamydia identified and major outcomes averted. We assessed methodological quality and the modelling approach used. RESULTS: Of 713 identified papers, we included 57 formal economic evaluations and two cost studies. Most studies found chlamydia screening to be cost-effective, partner notification to be an effective adjunct, and testing with nucleic acid amplification tests and treatment with azithromycin to be cost-effective. Methodological problems limited the validity of these findings: most studies used static models that are inappropriate for infectious diseases; restricted outcomes were used as a basis for policy recommendations; and high estimates of the probability of chlamydia-associated complications might have overestimated cost-effectiveness. Two high-quality dynamic modelling studies found opportunistic screening to be cost-effective, but poor reporting and uncertainty about complication rates make interpretation difficult. CONCLUSION: The inappropriate use of static models to study interventions against a communicable disease means that uncertainty remains about whether chlamydia screening programmes are cost-effective. The results of this review can be used by health service managers in the allocation of resources, and by health economists and other researchers considering further research in this area.


BACKGROUND: In clinical practice a diagnosis is based on a combination of clinical history, physical examination, and additional diagnostic tests. At present, studies in diagnostic research often report the accuracy of tests without taking into account the information already known from the history and examination. Because of this lack of information, together with variations in the design and quality of studies, conventional meta-analyses based on these studies will not show the accuracy of the tests as used in real practice. Using individual patient data (IPD) to perform meta-analyses allows the accuracy of tests to be assessed in relation to other patient characteristics, and allows the development or evaluation of diagnostic algorithms for individual patients. In this study we will examine these potential benefits in four clinical diagnostic problems in the field of gynaecology, obstetrics, and reproductive medicine. METHODS/DESIGN: Based on earlier systematic reviews for each of the four clinical problems, studies are considered for inclusion. The first authors of the included studies will be invited to participate and share their original data. After assessment of validity and completeness, the acquired datasets will be merged. Based on these data, a series of analyses will be performed, including a systematic comparison of the results of the IPD meta-analysis with those of a conventional meta-analysis, development of multivariable models for clinical history alone and for the combination of history, physical examination, and relevant diagnostic tests, and development of clinical prediction rules for individual patients. These will be made accessible to clinicians. DISCUSSION: The use of IPD meta-analysis will allow the accuracy of diagnostic tests to be evaluated in relation to other relevant information. Ultimately, this could increase the efficiency of the diagnostic work-up, e.g. by reducing the need for invasive tests and/or improving the accuracy of the diagnostic work-up.
This study will assess whether these benefits of IPD meta-analysis over conventional meta-analysis can be exploited and will provide a framework for future IPD meta-analyses in diagnostic and prognostic research.