909 results for CLINICAL UTILITY
Abstract:
With the increasing availability of high-quality digital cameras that can be operated easily by non-professional photographers, the use of digital images to assess endpoints in clinical research on skin lesions is gaining acceptance. However, rigorous protocols and descriptions of experience with digital image collection and assessment are not readily available, particularly for research conducted in remote settings. We describe the development and evaluation of a protocol for digital image collection by non-professional photographers in a remote-setting research trial, together with a novel methodology for assessment of clinical outcomes by an expert panel blinded to treatment allocation.
Abstract:
Giant cell arteritis (GCA) is the most common vasculitis affecting the elderly. Archived formalin-fixed paraffin-embedded (FFPE) temporal artery biopsy (TAB) specimens potentially represent a valuable resource for large-scale genetic analysis of this disease. FFPE TAB samples were obtained from 12 patients with GCA. Extracted TAB DNA was assessed by real-time PCR before restoration using the Illumina HD FFPE Restore Kit. Paired FFPE-blood samples were genotyped on the Illumina OmniExpress FFPE microarray. The FFPE samples that passed stringent quality control measures had a mean genotyping success rate of >97%. When compared with their matching peripheral blood DNA, the mean discordance rates for heterozygote and homozygote single nucleotide polymorphism calls were 0.0028 and 0.0003, respectively, which is within the accepted tolerance of reproducibility. This work demonstrates that high-quality microarray-based genotypes can be obtained from FFPE TAB samples and that these data are comparable to those obtained from peripheral blood.
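The concordance check reported above reduces to an element-wise comparison of paired genotype calls, split by whether the reference (blood) call is heterozygous or homozygous. Below is a minimal sketch of that calculation, assuming genotypes are encoded as two-character allele strings; the data structures and toy values are illustrative and not taken from the study.

```python
# Sketch: heterozygote/homozygote discordance between paired FFPE and blood
# genotype calls. Sample encoding and names are illustrative, not from the study.

def discordance_rates(ffpe_calls, blood_calls):
    """Return (heterozygote, homozygote) discordance rates, using the blood
    call as the reference and skipping SNPs with no FFPE call."""
    het_total = het_discordant = 0
    hom_total = hom_discordant = 0
    for snp, blood_gt in blood_calls.items():
        ffpe_gt = ffpe_calls.get(snp)
        if ffpe_gt is None:
            continue
        if blood_gt[0] != blood_gt[1]:      # heterozygous reference call
            het_total += 1
            het_discordant += ffpe_gt != blood_gt
        else:                               # homozygous reference call
            hom_total += 1
            hom_discordant += ffpe_gt != blood_gt
    het_rate = het_discordant / het_total if het_total else float("nan")
    hom_rate = hom_discordant / hom_total if hom_total else float("nan")
    return het_rate, hom_rate

# Toy example with one discordant heterozygote call
blood = {"rs1": "AG", "rs2": "CC", "rs3": "TT"}
ffpe = {"rs1": "AA", "rs2": "CC", "rs3": "TT"}
print(discordance_rates(ffpe, blood))  # (1.0, 0.0) for this toy data
```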
Abstract:
Background: Clinicians frequently use their own judgement to assess patients' hydration status, although there is limited evidence for the diagnostic utility of any individual clinical symptom. Hence, the aim of this study was to compare the diagnostic accuracy of clinically assessed dehydration in older hospital patients (using multiple symptoms) against dehydration measured using serum calculated osmolality (CO) as the reference standard. Method: Participants were 44 hospital patients aged ≥60 years. Dehydration was assessed clinically and pathologically (CO) within 24 hours of admission and at study exit. Indicators of diagnostic accuracy were calculated. Results: Clinicians identified 27% of patients as dehydrated at admission and 19% at exit, compared with 19% and 16% using CO. Agreement between the measures was fair at admission and poor at exit. Clinical assessment showed poor sensitivity for predicting dehydration, with reasonable specificity. Conclusions: Compared with the use of CO, clinical assessment of dehydration in older patients performed poorly. Given that failure to identify dehydration in this population may have serious consequences, we recommend that clinicians do not rely on their own assessments without also using the reference standard.
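The diagnostic-accuracy indicators reported here (sensitivity, specificity, agreement) all derive from the 2×2 table of clinical assessment against the calculated-osmolality reference standard. A minimal sketch of those calculations follows, using Cohen's kappa as the chance-corrected agreement measure; the counts are illustrative placeholders, not the study data.

```python
# Sketch: diagnostic accuracy of a clinical assessment against a reference
# standard, from a 2x2 table. Counts below are illustrative, not the study data.

def diagnostic_accuracy(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: chance-corrected agreement between test and reference
    observed = (tp + tn) / n
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Illustrative counts only
print(diagnostic_accuracy(tp=4, fp=6, fn=4, tn=26))
```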
Abstract:
Background: Poor outcomes of invasive candidiasis (IC) are associated with the difficulty of establishing the microbiological diagnosis at an early stage. New scores and laboratory tests have been developed to enable early therapeutic intervention in an attempt to reduce the high mortality associated with invasive fungal infections. The Candida albicans IFA IgG assay has recently been commercialized for germ tube antibody detection (CAGTA). This test provides a rapid and simple diagnosis of IC (84.4% sensitivity and 94.7% specificity). The aim of this study was to identify the patients who could benefit from the use of the CAGTA test in the critical care setting. Methods: A prospective, observational, multicentre cohort study was carried out in six medical/surgical intensive care units (ICUs) of tertiary-care Spanish hospitals. The CAGTA test was performed twice a week if predetermined risk factors were present, and serologically demonstrated candidiasis was considered present if the serum dilution tested was ≥1:160 in at least one sample and no other microbiological evidence of invasive candidiasis was found. Results: Fifty-three critically ill non-neutropenic patients (37.7% post-surgical) were included. Twenty-two patients (41.5%) had CAGTA-positive results, none of them with a positive blood culture for Candida. Neither the corrected colonization index nor antifungal treatment influenced CAGTA results. This finding could corroborate that CAGTA may be an important biomarker for distinguishing between colonization and infection in these patients. The presence of acute renal failure at the beginning of the study was more frequent in CAGTA-negative patients. Previous surgery was statistically more frequent in CAGTA-positive patients. Conclusions: This study identified previous surgery as the principal clinical factor associated with CAGTA-positive results and emphasises the utility of this promising technique, which was not influenced by high Candida colonization or antifungal treatment. Our results suggest that detection of CAGTA may be important for the diagnosis of invasive candidiasis in surgical patients admitted to the ICU.
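Whether the reported 84.4% sensitivity and 94.7% specificity translate into useful predictive values depends on the prevalence of invasive candidiasis in the population tested. A short worked sketch of the standard Bayes calculation is shown below; the prevalence value is an illustrative assumption, not a figure from the study.

```python
# Sketch: positive/negative predictive value from sensitivity, specificity,
# and an assumed prevalence. The prevalence here is illustrative only.

def predictive_values(sensitivity, specificity, prevalence):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Reported CAGTA performance, with an assumed 10% prevalence of invasive candidiasis
ppv, npv = predictive_values(sensitivity=0.844, specificity=0.947, prevalence=0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # about 0.64 and 0.98 under these assumptions
```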
Abstract:
Intraoperative assessment of surgical margins is critical to ensuring that residual tumor does not remain in a patient. Previously, we developed a fluorescence structured illumination microscope (SIM) system with a single-shot field of view (FOV) of 2.1 × 1.6 mm (3.4 mm²) and sub-cellular resolution (4.4 μm). The goal of this study was to test the utility of this technology for the detection of residual disease in a genetically engineered mouse model of sarcoma. Primary soft tissue sarcomas were generated in the hindlimb and, after the tumor was surgically removed, the relevant margin was stained with acridine orange (AO), a vital stain that brightly stains cell nuclei and fibrous tissues. The tissues were imaged with the SIM system with the primary goal of visualizing fluorescent features from tumor nuclei. Given the heterogeneity of the background tissue (presence of adipose tissue and muscle), an algorithm known as maximally stable extremal regions (MSER) was optimized and applied to the images to specifically segment nuclear features. A logistic regression model was used to classify a tissue site as positive or negative based on the area fraction and shape of the segmented features, and the resulting receiver operating characteristic (ROC) curve was generated by varying the probability threshold. Based on the ROC curves, the model classified tumor and normal tissue with 77% sensitivity and 81% specificity (Youden's index). For an unbiased measure of model performance, it was applied to a separate validation dataset, which yielded 73% sensitivity and 80% specificity. When this approach was applied to representative whole margins with a tumor probability threshold of 50%, only 1.2% of all regions from the negative margin exceeded this threshold, while over 14.8% of all regions from the positive margin did.
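The pipeline described here (MSER segmentation of nuclear features, logistic regression on area fraction and shape, and an operating point chosen via the ROC curve and Youden's index) can be sketched with standard libraries. The sketch below uses OpenCV's MSER detector and scikit-learn; the per-site features, synthetic images, and labels are illustrative stand-ins under stated assumptions, not the authors' implementation or data.

```python
# Sketch of the described pipeline: MSER segmentation of nuclear features
# followed by logistic regression on per-site features, with the operating
# point chosen by Youden's index. Features, synthetic images, and labels
# are illustrative stand-ins, not the authors' data or implementation.
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

def mser_features(gray_image):
    """Segment stable regions and summarise them for one tissue site."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray_image)
    if len(regions) == 0:
        return np.array([0.0, 0.0])
    area_fraction = sum(len(r) for r in regions) / gray_image.size
    # Crude shape descriptor: bounding-box aspect ratio of each region
    aspects = []
    for r in regions:
        x, y, w, h = cv2.boundingRect(r.reshape(-1, 1, 2))
        aspects.append(min(w, h) / max(w, h))
    return np.array([area_fraction, float(np.mean(aspects))])

# Hypothetical data: synthetic grayscale tiles standing in for margin sites
rng = np.random.default_rng(0)
images = [(rng.random((128, 128)) * 255).astype(np.uint8) for _ in range(20)]
labels = np.array([0, 1] * 10)                 # 1 = tumour site, 0 = normal

X = np.vstack([mser_features(img) for img in images])
model = LogisticRegression().fit(X, labels)
probs = model.predict_proba(X)[:, 1]

fpr, tpr, thresholds = roc_curve(labels, probs)
best = np.argmax(tpr - fpr)                    # Youden's J statistic
print("operating threshold:", thresholds[best])
```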
Abstract:
BACKGROUND: Risk assessment with a thorough family health history is recommended by numerous organizations and is now a required component of the annual physical for Medicare beneficiaries under the Affordable Care Act. However, there are several barriers to incorporating robust risk assessments into routine care. MeTree, a web-based patient-facing health risk assessment tool, was developed with the aim of overcoming these barriers. To better understand which factors will be instrumental for broader adoption of risk assessment programs like MeTree in clinical settings, we obtained funding to perform a type III hybrid implementation-effectiveness study in primary care clinics at five diverse healthcare systems. Here, we describe the study's protocol. METHODS/DESIGN: MeTree collects personal medical information and a three-generation family health history from patients on 98 conditions. Using algorithms built entirely from current clinical guidelines, it provides clinical decision support to providers and patients on 30 conditions. All adult patients with an upcoming well-visit appointment at one of the 20 intervention clinics are eligible to participate. Patient-oriented risk reports are provided in real time. Provider-oriented risk reports are uploaded to the electronic medical record for review at the time of the appointment. Implementation outcomes are the enrollment rates of clinics, providers, and patients (enrolled vs approached) and their representativeness compared with the underlying population. Primary effectiveness outcomes are the percentage of participants newly identified as being at increased risk for one of the clinical decision support conditions and the percentage with appropriate risk-based screening. Secondary outcomes include the percentage change in those meeting goals for a healthy lifestyle (diet, exercise, and smoking). Outcomes are measured through electronic medical record data abstraction, patient surveys, and surveys/qualitative interviews of clinical staff. DISCUSSION: This study evaluates factors that are critical to the successful implementation of a web-based risk assessment tool into routine clinical care in a variety of healthcare settings. The results will identify resource needs and potential barriers and solutions to implementation in each setting, as well as provide an understanding of potential effectiveness. TRIAL REGISTRATION: NCT01956773.
Abstract:
Background: Unexplained persistent breathlessness in patients with difficult asthma despite multiple treatments is a common clinical problem. Cardiopulmonary exercise testing (CPX) may help identify the mechanism causing these symptoms, allowing appropriate management.
Methods: This was a retrospective analysis of patients attending a specialist-provided service for difficult asthma who proceeded to CPX as part of our evaluation protocol. Patient demographics, lung function, and use of health care and rescue medication were compared with those in patients with refractory asthma. Medication use 6 months following CPX was compared with treatment during CPX.
Results: Of 302 sequential referrals, 39 patients underwent CPX. A single explanatory feature was identified in 30 patients and two features in nine patients: hyperventilation (n = 14), exercise-induced bronchoconstriction (n = 8), submaximal test (n = 8), normal test (n = 8), ventilatory limitation (n = 7), deconditioning (n = 2), and cardiac ischemia (n = 1). Compared with patients with refractory asthma, patients without “pulmonary limitation” on CPX were prescribed similar doses of inhaled corticosteroid (ICS) (median, 1,300 µg [interquartile range (IQR), 800-2,000 µg] vs 1,800 µg [IQR, 1,000-2,000 µg]) and similar numbers of rescue oral steroid courses in the previous year (median, 5 [1-6] vs 5 [1-6]). In this group, ICS doses were reduced 6 months post-CPX (median, 1,300 µg [IQR, 800-2,000 µg] to 800 µg [IQR, 400-1,000 µg]; P < .001) and additional medication was withdrawn (n = 7). Patients with pulmonary limitation had unchanged ICS doses post-CPX, and additional therapies were introduced.
Conclusions: In difficult asthma, CPX can confirm that persistent exertional breathlessness is due to asthma but can also identify other contributing factors. Patients with nonpulmonary limitation are prescribed inappropriately high doses of steroid therapy, and CPX can identify the primary mechanism of breathlessness, facilitating steroid reduction.
Abstract:
Rationale: Nonadherence to inhaled corticosteroid (ICS) therapy is a major contributor to poor control in difficult asthma, yet it is challenging to ascertain. Objectives: To identify a test for nonadherence based on suppression of fractional exhaled nitric oxide (FENO) after directly observed inhaled corticosteroid (DOICS) treatment. Methods: Patients with difficult asthma and an elevated FENO (>45 ppb) were recruited as adherent (ICS prescription filling >80%) or nonadherent (filling <50%). They received 7 days of DOICS (budesonide 1,600 µg), and a test for nonadherence based on changes in FENO was developed. Using this test, clinic patients were prospectively classified as adherent or nonadherent, and this classification was then validated against prescription filling records, prednisolone assay, and a concordance interview. Measurements and Main Results: After 7 days of DOICS, nonadherent subjects (n = 9) had a greater reduction in FENO than adherent subjects (n = 13), to 47 ± 21% versus 79 ± 26% of the baseline measurement (P = 0.003); this difference was also evident after 5 days (P = 0.02), and a FENO test for nonadherence (area under the curve = 0.86; 95% confidence interval, 0.68-1.00) was defined. Prospective validation in 40 subjects found that the test identified 13 as nonadherent; eight confirmed nonadherence during interview (three of whom had excellent prescription filling but did not take the medication), five denied nonadherence, two had poor inhaler technique (unintentional nonadherence), and one also denied nonadherence to prednisolone despite a blood level indicating nonadherence. Twenty-seven participants were adherent on testing, which was confirmed in 21. Five admitted poor ICS adherence, but of these, four were adherent with oral steroids and one with omalizumab. Conclusions: FENO suppression after DOICS provides an objective test to distinguish adherent from nonadherent patients with difficult asthma. Clinical trial registered with www.clinicaltrials.gov (NCT01219036). Copyright © 2012 by the American Thoracic Society.
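The test described here rests on the ratio of post-treatment FENO to baseline after a week of directly observed therapy, with marked suppression flagging likely prior nonadherence. A minimal sketch of that classification step follows; the 0.58 cut-off is a hypothetical value for illustration, not the threshold derived in the study.

```python
# Sketch: classify likely ICS nonadherence from FENO suppression after
# directly observed treatment. The 0.58 cut-off is an illustrative assumption,
# not the threshold derived in the study.

def feno_suppression(baseline_ppb, post_doics_ppb):
    """Return post-treatment FENO as a fraction of the baseline measurement."""
    return post_doics_ppb / baseline_ppb

def likely_nonadherent(baseline_ppb, post_doics_ppb, cutoff=0.58):
    # Marked suppression after observed dosing suggests the drug was not
    # being taken regularly beforehand.
    return feno_suppression(baseline_ppb, post_doics_ppb) < cutoff

print(likely_nonadherent(baseline_ppb=80, post_doics_ppb=30))  # True under these assumptions
```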
Abstract:
Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research. Data sources: Major electronic databases were searched up to December 2005. Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored, highly sensitive electronic searches were undertaken. Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly the most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, the mean time to blindness in at least one eye without treatment was approximately 23 years, compared with 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds, with a screening interval of 10 years, to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age therefore appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would cover only 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification, followed by assessment of test positives by a specialised optometrist, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence, assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression, and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed; however, a highly specific test is required to reduce the large number of false-positive referrals. The finding that population screening is unlikely to be cost-effective is based on an economic model whose parameter estimates have considerable uncertainty; in particular, if the rate of progression and/or the costs of visual impairment are higher than estimated, then screening could be cost-effective. Conclusions: While population screening is not cost-effective, targeted screening of high-risk groups may be.
Procedures for identifying those at risk and for quality assuring the programme, as well as adequate service provision for those screening positive, would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination and by improving the performance of current testing, either by refining practice or by adding a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on the costs of blindness, the risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
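The cost-effectiveness judgements in this review ultimately compare incremental cost per quality-adjusted life-year against a willingness-to-pay threshold (£30,000 per QALY in the base case). A minimal sketch of that decision rule is shown below; the cost and QALY figures are illustrative placeholders, not outputs of the review's Markov model.

```python
# Sketch: incremental cost-effectiveness ratio (ICER) compared with a
# willingness-to-pay threshold. Inputs are illustrative placeholders,
# not outputs of the review's Markov model.

def icer(cost_screening, cost_no_screening, qalys_screening, qalys_no_screening):
    return ((cost_screening - cost_no_screening)
            / (qalys_screening - qalys_no_screening))

WILLINGNESS_TO_PAY = 30_000  # pounds per QALY, as in the base case

ratio = icer(cost_screening=1_200, cost_no_screening=400,
             qalys_screening=14.05, qalys_no_screening=14.02)
print(f"ICER = £{ratio:,.0f} per QALY",
      "-> cost-effective" if ratio <= WILLINGNESS_TO_PAY else "-> not cost-effective")
```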
Abstract:
Determination of HER2 protein expression by immunohistochemistry (IHC) and of genomic status by fluorescence in situ hybridisation (FISH) is important in identifying a subset of high HER2-expressing gastric cancers that might respond to trastuzumab. Although FISH is considered the standard for determination of HER2 genomic status, brightfield ISH is increasingly recognised as a viable alternative. In addition, the impact of heterogeneity in HER2 protein expression and genomic status on the accuracy of HER2 testing has not been well studied in the context of gastric biopsy samples.
Abstract:
Aim: This paper presents a review protocol that will be used to identify, critically appraise and synthesize the best current evidence on the use of online learning and blended learning approaches for teaching clinical skills in undergraduate nursing.
Background: Although previous systematic reviews comparing online learning with face-to-face learning have been undertaken (Cavanaugh et al. 2010, Cook et al. 2010), a systematic review of the impact of online learning and blended learning for teaching clinical skills in undergraduate nursing has yet to be conducted. Reviewing nursing students' online learning experiences may allow systems to be designed that ensure all students are supported appropriately to meet their learning needs.
Methods/Design: The key objectives of the review are to evaluate how online-learning teaching strategies help nursing students learn; to evaluate students' satisfaction with this form of teaching; to explore the variety of online-learning strategies used; to determine which online-learning strategies are most effective; and to determine whether supplementary face-to-face instruction enhances learning. A search will be made of the following databases: MEDLINE, CINAHL, BREI, ERIC and AUEI. The review will follow the Joanna Briggs Institute guidance for systematic reviews of quantitative and qualitative research.
Conclusion: This review intends to report on a combination of student experience and learning outcomes, thereby increasing its utility for educators and curriculum developers involved in healthcare education.