Abstract:
Introduction Commercial treatment planning systems employ a variety of dose calculation algorithms to plan and predict the dose distributions a patient receives during external beam radiation therapy. Traditionally, the Radiological Physics Center (RPC) has relied on measurements to assure that institutions participating in National Cancer Institute sponsored clinical trials administer radiation in doses that are clinically comparable to those of other participating institutions. To complement the effort of the RPC, an independent dose calculation tool needs to be developed that will enable a generic method to determine patient dose distributions in three dimensions and to perform retrospective analysis of radiation delivered to patients enrolled in past clinical trials. Methods A multi-source model representing output for Varian 6 MV and 10 MV photon beams was developed and evaluated. The Monte Carlo algorithm known as the Dose Planning Method (DPM) was used to perform the dose calculations. The dose calculations were compared to measurements made in a water phantom and in anthropomorphic phantoms. Intensity modulated radiation therapy and stereotactic body radiation therapy techniques were used with the anthropomorphic phantoms. Finally, past patient treatment plans were selected, recalculated using DPM, and contrasted against a commercial dose calculation algorithm. Results The multi-source model was validated for the Varian 6 MV and 10 MV photon beams. The benchmark evaluations demonstrated the ability of the model to accurately calculate dose for both source models. The patient calculations showed that the model was reproducible in determining dose under conditions similar to those described by the benchmark tests. Conclusions The dose calculation tool, which relies on a multi-source model and uses the DPM code to calculate dose, was developed, validated, and benchmarked for the Varian 6 MV and 10 MV photon beams.
Several patient dose distributions were contrasted against a commercial algorithm to provide a proof of principle for use as an application in monitoring clinical trial activity.
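Benchmark comparisons between calculated and measured dose distributions, such as those described above, are commonly quantified with a gamma index (a combined dose-difference / distance-to-agreement criterion). The abstract does not state which metric was used; the following is a minimal 1D sketch of the idea, with hypothetical tolerances:

```python
# Minimal 1D gamma-index sketch (hypothetical illustration, not the RPC/DPM code).
# A gamma value <= 1 at a point means the calculated profile agrees with the
# measured one within the stated dose-difference and distance-to-agreement criteria.

def gamma_1d(positions, measured, calculated, dose_tol=0.03, dist_tol_mm=3.0):
    """Return a per-point gamma value for two relative-dose profiles on one grid."""
    gammas = []
    for x_m, d_m in zip(positions, measured):
        best = float("inf")
        for x_c, d_c in zip(positions, calculated):
            dose_term = ((d_c - d_m) / dose_tol) ** 2
            dist_term = ((x_c - x_m) / dist_tol_mm) ** 2
            best = min(best, (dose_term + dist_term) ** 0.5)
        gammas.append(best)
    return gammas
```

Identical profiles yield gamma 0 everywhere; a profile shifted by exactly one dose tolerance yields gamma 1 where no closer spatial match exists.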
Abstract:
Purpose. The aim of this research was to evaluate the effect of enteral feeding on tonometric measurement of gastric regional carbon dioxide levels (PrCO2) in normal healthy volunteers. Design and methods. The sample included 12 healthy volunteers recruited by the University Clinical Research Center (UCRC). An air tonometry system monitored PrCO2 levels using a tonometer placed in the lumen of the stomach via orogastric intubation. PrCO2 was automatically measured and recorded every 10 minutes throughout the five-hour study period. An oral dose of famotidine 40 mg was self-administered the evening prior to and the morning of the study. Instillation of Isocal® High Nitrogen (HN) was used for enteral feeding in hourly escalating doses of 0, 40, 60, and 80 ml/hr with no feeding during the fifth hour. Results. PrCO2 measurements at time 0 and 10 minutes (41.4 ± 6.5 and 41.8 ± 5.7, respectively) demonstrated biologic precision (Levene's Test statistic = 0.085, p-value 0.774). Biologic precision was lost between T130 and T140 when compared to baseline T0 (Levene's Test statistic = 1.70, p-value 0.205; and 3.205, p-value 0.042, respectively) and returned to non-significant levels between T270 and T280 (Levene's Test statistic = 3.083, p-value 0.043; and 2.307, p-value 0.143, respectively). Isocal® HN significantly affected the biologic accuracy of PrCO2 measurements (repeated measures ANOVA F 4.91, p-value <0.001). After 20 minutes of enteral feeding at 40 ml/hr, PrCO2 significantly increased (41.4 ± 6.5 to 46.6 ± 4.25, F = 5.4, p-value 0.029). Maximum variance from baseline (41.4 ± 6.5 to 61.3 ± 15.2, F = 17.22, p-value <0.001) was noted after 30 minutes of Isocal® HN at 80 ml/hr, or 210 minutes from baseline. The significant elevations in PrCO2 continued throughout the study. Sixty minutes after discontinuation of enteral feeding, PrCO2 remained significantly elevated from baseline (41.4 ± 6.5 to 51.8 ± 9.2, F = 10.15, p-value 0.004). Conclusion.
Enteral feeding with Isocal® HN significantly affects the precision and accuracy of PrCO2 measurements in healthy volunteers.
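The precision analysis above rests on Levene's test for equality of variances. A minimal pure-Python sketch of the mean-based Levene statistic follows; the numbers in the example are invented, and the study's reported values come from its own data:

```python
# Mean-based Levene statistic for equality of variances across k groups.
# Illustrative sketch only; compare the statistic to an F(k-1, N-k) distribution
# to obtain a p-value.

def levene_statistic(*groups):
    k = len(groups)
    # Absolute deviations of each observation from its group mean.
    z = []
    for g in groups:
        mean_g = sum(g) / len(g)
        z.append([abs(x - mean_g) for x in g])
    n_total = sum(len(g) for g in groups)
    zbar_i = [sum(zi) / len(zi) for zi in z]           # per-group mean deviation
    zbar = sum(sum(zi) for zi in z) / n_total          # grand mean deviation
    between = sum(len(zi) * (zb - zbar) ** 2 for zi, zb in zip(z, zbar_i))
    within = sum((x - zb) ** 2 for zi, zb in zip(z, zbar_i) for x in zi)
    return (n_total - k) / (k - 1) * between / within
```

Two groups with identical spread give a statistic of 0; the statistic grows as the group variances diverge.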
Abstract:
Multiple sclerosis (MS) is a chronic disease with an inflammatory and neurodegenerative pathology. Axonal loss and neurodegeneration occur early in the disease course and may lead to irreversible neurological impairment. Changes in brain volume, observed from the earliest stage of MS and proceeding throughout the disease course, may be an accurate measure of neurodegeneration and tissue damage. There are a number of magnetic resonance imaging-based methods for determining global or regional brain volume, including cross-sectional (e.g. brain parenchymal fraction) and longitudinal techniques (e.g. SIENA [Structural Image Evaluation using Normalization of Atrophy]). Although these methods are sensitive and reproducible, caution must be exercised when interpreting brain volume data, as numerous factors (e.g. pseudoatrophy) may have a confounding effect on measurements, especially in a disease with complex pathological substrates such as MS. Brain volume loss has been correlated with disability progression and cognitive impairment in MS, with the loss of grey matter volume more closely correlated with clinical measures than loss of white matter volume. Preventing brain volume loss may therefore have important clinical implications affecting treatment decisions, with several clinical trials now demonstrating an effect of disease-modifying treatments (DMTs) on reducing brain volume loss. In clinical practice, it may therefore be important to consider the potential impact of a therapy on reducing the rate of brain volume loss. This article reviews the measurement of brain volume in clinical trials and practice, the effect of DMTs on brain volume change across trials and the clinical relevance of brain volume loss in MS.
Abstract:
X-ray imaging is one of the most commonly used medical imaging modalities. Although X-ray radiographs provide important clinical information for diagnosis, planning and post-operative follow-up, their 2D projection characteristics and unknown magnification factor make interpretation challenging and constrain the full benefit of X-ray imaging. In order to overcome these drawbacks, we propose an easy-to-use X-ray calibration object and an optimization method to robustly find correspondences between the 3D fiducials of the calibration object and their 2D projections. In this work we present all the details of this concept. Moreover, we demonstrate the potential of using such a method to precisely extract information from calibrated X-ray radiographs for two different orthopedic applications: post-operative acetabular cup implant orientation measurement and 3D vertebral body displacement measurement during preoperative traction tests. In the first application, we achieved a clinically acceptable accuracy of below 1° for both anteversion and inclination angles, while in the second application an average displacement of 8.06±3.71 mm was measured. The results of both applications indicate the importance of using X-ray calibration in the clinical routine.
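One reason calibration matters is projective magnification: a feature at source-to-object distance SOD appears scaled by SID/SOD on a detector at source-to-image distance SID. A minimal sketch of that correction follows, with invented distances and measurements (not values from the paper):

```python
# Correcting a length measured on a radiograph for projective magnification.
# SID: source-to-image (detector) distance; SOD: source-to-object distance.
# All distances and lengths below are hypothetical example values.

def true_length(measured_mm, sid_mm, sod_mm):
    """Object size = image size / magnification, with magnification = SID / SOD."""
    magnification = sid_mm / sod_mm
    return measured_mm / magnification

# Example: a feature measuring 33 mm on the detector, SID 1100 mm, SOD 1000 mm.
corrected = true_length(33.0, 1100.0, 1000.0)  # ≈ 30 mm actual size
```

Without calibration, SOD is unknown for an arbitrary radiograph, which is exactly the gap a fiducial-based calibration object closes.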
Abstract:
Reduced bone stock can result in fractures that mostly occur in the spine, distal radius, and proximal femur. In case of operative treatment, osteoporosis is associated with an increased failure rate. To estimate implant anchorage, mechanical methods seem promising for measuring bone strength intraoperatively. It has been shown in vitro that the mechanical peak torque correlates with local bone mineral density and screw failure load in the hip, hindfoot, humerus, and spine. One device to measure mechanical peak torque is the DensiProbe (AO Research Institute, Davos, Switzerland). The device has shown its effectiveness in mechanical peak torque measurement in mechanical testing setups for use in the hip, hindfoot, and spine. In all studies, a correlation of mechanical torque measurement with local bone mineral density and screw failure load could be shown. The technique allows the surgeon to judge local bone strength intraoperatively, directly at the region of interest, and gives valuable information on whether additional augmentation is needed. We summarize the methods of this new technique and its advantages and limitations, and give an overview of current and possible future applications.
Abstract:
Over the last 20 years, health literacy (German: Gesundheitskompetenz/health competency) has become a popular concept in research and health policy. Initially defined as an individual's ability to understand medical information, the definition has quickly expanded to describe individual-based resources for actions or conduct relevant to health, in different socio-cultural or clinical contexts. Today, researchers and practice experts can draw on a wide variety of definitions and measurements. This article provides an overview of the definitions, briefly introduces the "structure and agency" approach as an example of theorizing health literacy, and shows different types of operationalization. The article presents the strengths and shortcomings of the available concepts and measures and provides starting points for future research in public health and health promotion.
Abstract:
REASONS FOR PERFORMING STUDY: The diagnosis of equine back disorders is challenging. Objectively determining movement of the vertebral column may therefore be of value in a clinical setting. OBJECTIVES: To establish whether surface-mounted inertial measurement units (IMUs) can be used to establish normal values for range of motion (ROM) of the vertebral column in a uniform population of horses trotting under different conditions. STUDY DESIGN: Vertebral ROM was established in Franches-Montagnes stallions and in a general population of horses, and the variability in measurements was compared between the two groups. Repeatability and the influence of specific exercise conditions on ROM were assessed. Finally, attempts were made to explain the findings of the study through the evaluation of factors that might influence ROM. METHODS: Dorsoventral (DV) and mediolateral (ML) vertebral ROM was measured at a trot under different exercise conditions in 27 Franches-Montagnes stallions and six general population horses using IMUs distributed over the vertebral column. RESULTS: Variability in the ROM measurements was significantly higher for general population horses than for Franches-Montagnes stallions (for both DV and ML ROM). Repeatability was strong to very strong for DV measurements and moderate for ML measurements. Trotting under saddle significantly reduced the ROM, with sitting trot resulting in a significantly lower ROM than rising trot. Age is unlikely to explain the low variability in vertebral ROM recorded in the Franches-Montagnes horses, which may instead be associated with conformational factors. CONCLUSIONS: It was possible to establish a normal vertebral ROM for a group of Franches-Montagnes stallions. While within-breed variation was low in this population, further studies are necessary to determine variation in vertebral ROM for other breeds and to assess the utility of such measurements for the diagnosis of equine back disorders.
Abstract:
Objective: Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, this article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS), as well as the applied quality assurance measures. Finally, central insights gained over the last years are presented. Methods: Based on the principles of action research, the FLE CS is in a constant state of further development. Building on systematically documented experiences from previous years, the Working Group discusses unresolved questions and substantiates the resulting solution approaches (planning), implements them in the examination (implementation) and subsequently evaluates them (reflection). The results presented here are the product of this iterative procedure. Results: The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years is that the consistent implementation of the principles of action research contributes to the successful further development of the examination. Conclusion: The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS.
The processes and insights presented here can be useful for others planning a similar undertaking.
Keywords: national final examination, licensing examination, summative assessment, OSCE, action research
Abstract:
BACKGROUND Canine S100 calcium-binding protein A12 (cS100A12) shows promise as a biomarker of inflammation in dogs. A previously developed cS100A12 radioimmunoassay (RIA) requires radioactive tracers and is not sensitive enough to measure fecal cS100A12 concentrations in 79% of tested healthy dogs. An ELISA may be more sensitive than the RIA and does not require radioactive tracers. OBJECTIVE The purpose of the study was to establish a sandwich ELISA for serum and fecal cS100A12, and to establish reference intervals (RI) for serum and feces from healthy dogs. METHODS Polyclonal rabbit anti-cS100A12 antibodies were generated and tested by Western blotting and immunohistochemistry. A sandwich ELISA was developed and validated, including accuracy and precision, and agreement with the cS100A12-RIA. The RI, stability, and biologic variation of fecal cS100A12, and the effect of corticosteroids on serum cS100A12, were evaluated. RESULTS Lower detection limits were 5 μg/L (serum) and 1 ng/g (fecal), respectively. Intra- and inter-assay coefficients of variation were ≤ 4.4% and ≤ 10.9%, respectively. Observed-to-expected ratios for linearity and spiking recovery were 98.2 ± 9.8% (mean ± SD) and 93.0 ± 6.1%, respectively. There was a significant bias between the ELISA and the RIA. The RI was 49-320 μg/L for serum and 2-484 ng/g for fecal cS100A12. Fecal cS100A12 was stable for 7 days at 23, 4, -20, and -80°C; biologic variation was negligible, but variation within one fecal sample was significant. Corticosteroid treatment had no clinically significant effect on serum cS100A12 concentrations. CONCLUSIONS The cS100A12-ELISA is a precise and accurate assay for serum and fecal cS100A12 in dogs.
Abstract:
With the development of the water calorimeter, direct measurement of absorbed dose in water becomes possible. This could lead to the establishment of an absorbed dose standard, rather than an exposure-related standard, for ionization chambers for high energy electrons and photons. In changing to an absorbed dose standard it is necessary to investigate the effect of different parameters, among which are the energy dependence, the air volume, and the wall thickness and material of the chamber. The effect of these parameters is experimentally studied and presented for several commercially available chambers and one experimental chamber, for photons up to 25 MV and electrons up to 20 MeV, using a water calorimeter as the absorbed dose standard and the most recent formalism to calculate the absorbed dose with ion chambers. For electron beams, the dose measured with the calorimeter was 1% lower than the dose calculated with the chambers, independent of beam energy and chamber. For photon beams, the absorbed dose measured with the calorimeter was 3.8% higher than the absorbed dose calculated from the chamber readings. Such differences were found to be chamber and energy independent. The results for the photons were found to be statistically different from the results with the electron beams. Such a difference could not be attributed to a difference in the calorimeter response.
Abstract:
Clinical oncologists and cancer researchers benefit from information on the vascularization or non-vascularization of solid tumors because of blood flow's influence on three popular treatment types: hyperthermia therapy, radiotherapy, and chemotherapy. The objective of this research is the development of a clinically useful tumor blood flow measurement technique. The designed technique is sensitive, has good spatial resolution, is non-invasive, and presents no risk to the patient beyond his usual treatment (measurements are made only subsequent to normal patient treatment). Tumor blood flow was determined by measuring the washout of positron-emitting isotopes created through neutron therapy treatment. In order to do this, several technical and scientific questions were addressed first: (1) What isotopes are created in tumor tissue when it is irradiated in a neutron therapy beam, and how much of each isotope is expected? (2) What are the chemical states of the isotopes that are potentially useful for blood flow measurements, and will those chemical states allow these or other isotopes to be washed out of the tumor? (3) How should isotope washout by blood flow be modeled in order to make the most effective use of the data? These questions were answered through both theoretical calculation and measurement. The first question was answered through the measurement of macroscopic cross sections for the predominant nuclear reactions in the body. These results correlate well with an independent mathematical prediction of tissue activation and with measurements of mouse spleen neutron activation. The second question was addressed by performing cell suspension and protein precipitation techniques on neutron-activated mouse spleens. The third and final question was answered by using first physical principles to develop a model mimicking the blood flow system and measurement technique. In a final set of experiments, the above were applied to flow models and animals.
The ultimate aim of this project is to apply its methodology to neutron therapy patients.
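Isotope washout by blood flow is often modeled as a mono-exponential clearance, C(t) = C0·exp(−kt), with the rate constant k reflecting perfusion. A minimal log-linear fitting sketch on synthetic data follows; the dissertation's actual model, built from first principles, is more elaborate (multiple isotopes, decay correction):

```python
import math

# Fit a mono-exponential washout C(t) = C0 * exp(-k * t) by linear regression
# on log(C) versus t. Synthetic data only; illustrative of the modeling idea.

def fit_washout(times, counts):
    """Return (C0, k) from a least-squares line through log(counts) vs time."""
    logs = [math.log(c) for c in counts]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs)) / \
            sum((t - t_mean) ** 2 for t in times)
    return math.exp(l_mean - slope * t_mean), -slope

# Synthetic washout curve: C0 = 100 counts, k = 0.2 per minute.
times = [0, 1, 2, 3, 4]
counts = [100 * math.exp(-0.2 * t) for t in times]
c0, k = fit_washout(times, counts)  # recovers C0 ≈ 100, k ≈ 0.2
```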
Abstract:
Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumor. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded a coefficient of variation of 6.7% and 29% respectively, implying that the calculated rCBF value is far more precise for gray matter than white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and provide a quantitative comparison of several existing perfusion models. This study guided the selection of the optimum perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6 and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. 
Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
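The bootstrap precision estimate described above can be sketched as resampling a voxel's repeated measurements with replacement and taking the coefficient of variation of the resampled means. The values below are synthetic stand-ins, not the study's data:

```python
import random
import statistics

# Bootstrap estimate of the coefficient of variation (CV) of a voxel's mean
# rCBF value. The measurement list below is a hypothetical example.

def bootstrap_cv(samples, n_boot=2000, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    means = [
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    ]
    return statistics.stdev(means) / statistics.fmean(means)

# Hypothetical repeated gray-matter rCBF measurements, ml/100g/min.
gray_matter = [58.0, 61.5, 55.2, 60.3, 57.8, 62.1, 59.4, 56.9]
cv = bootstrap_cv(gray_matter)
```

A noisier voxel (e.g. white matter, with lower signal) would show a larger CV, matching the gray-versus-white contrast reported above.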
Abstract:
In a large health care system, accurate and timely feedback about performance is necessary at many levels, from senior management to service-level managers, for valid decision-making. The implementation of dashboards is one way to remedy the problem of data overload by providing up-to-date, accurate, and concise information. As this health care system seeks to have an organized, systematic review mechanism in place, dashboards are being created in a variety of the hospital service departments to monitor performance indicators. The Infection Control Administration of this health care system does not currently utilize a dashboard but seeks to implement one. The purpose of this project is to research and design a clinical dashboard for the Infection Control Administration. The intent is that the implementation and use of the clinical dashboard translate into improvement in the measurement of health care quality.
Abstract:
Two studies among college students were conducted to evaluate appropriate measurement methods for etiological research on computing-related upper extremity musculoskeletal disorders (UEMSDs). A cross-sectional study among 100 graduate students evaluated the utility of symptom surveys (a VAS scale and a 5-point Likert scale) compared with two UEMSD clinical classification systems (the Gerr and Moore protocols). The two symptom measures were highly concordant (Lin's rho = 0.54; Spearman's r = 0.72); the two clinical protocols were moderately concordant (Cohen's kappa = 0.50). Sensitivity and specificity, summarized by Youden's J statistic, did not reveal much agreement between the symptom surveys and the clinical examinations. It cannot be concluded that self-report symptom surveys can be used as a surrogate for clinical examinations. A pilot repeated measures study conducted among 30 undergraduate students evaluated computing exposure measurement methods. Key findings included temporal variations in symptoms; the odds of experiencing symptoms increased with every hour of computer use (adjOR = 1.1, p < .10) and with every stretch break taken (adjOR = 1.3, p < .10). When posture was measured using the Computer Use Checklist, a positive association with symptoms was observed (adjOR = 1.3, p < 0.10), while measuring posture using a modified Rapid Upper Limb Assessment produced unexpected and inconsistent associations. The findings were inconclusive in identifying an appropriate posture assessment or a superior conceptualization of computer use exposure. A cross-sectional study of 166 graduate students evaluated the comparability of graduate students to the College Computing & Health surveys administered to undergraduate students. Fifty-five percent reported computing-related pain and functional limitations. Years of computer use in graduate school and the number of years in school with weekly computer use ≥ 10 hours were associated with pain within an hour of computing in logistic regression analyses. The findings are consistent with the current literature on both undergraduate and graduate students.
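Concordance between the two clinical protocols above is summarized with Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch of the computation on a hypothetical 2×2 agreement table (the counts are invented, not the study's data):

```python
# Cohen's kappa for agreement between two raters or classification protocols.
# table[i][j] = count of subjects rated category i by protocol A and j by B.

def cohens_kappa(table):
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / total
    row_marg = [sum(row) / total for row in table]
    col_marg = [sum(row[j] for row in table) / total for j in range(len(table))]
    expected = sum(r * c for r, c in zip(row_marg, col_marg))
    return (observed - expected) / (1 - expected)

# Hypothetical table: 40 case/case, 45 non-case/non-case, 15 disagreements.
print(round(cohens_kappa([[40, 7], [8, 45]]), 2))  # prints 0.7
```

A kappa of 0.50, as reported for the Gerr and Moore protocols, is conventionally read as moderate agreement.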
Abstract:
Common endpoints can be divided into two categories. One is dichotomous endpoints, which take only fixed values (most often two values). The other is continuous endpoints, which can take any real number between two specified values. The choice of primary endpoint is critical in clinical trials. If we use only dichotomous endpoints, the power may be underestimated. If only continuous endpoints are chosen, we may not obtain the expected sample size due to the occurrence of significant clinical events. Combined endpoints are used in clinical trials to provide additional power. However, current combined or composite endpoints in cardiovascular disease clinical trials, and in most clinical trials, combine either dichotomous endpoints (total mortality + total hospitalization) or continuous endpoints (risk score). The present work applied a U-statistic to combine one dichotomous endpoint and one continuous endpoint with three different assessments, to calculate the sample size, and to test the hypothesis of a treatment effect. This is especially useful when some patients cannot provide the most precise measurement due to medical contraindication or personal reasons. Results show that this method has greater power than an analysis using continuous endpoints alone.
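The pairwise idea behind such a U-statistic can be sketched as a generalized pairwise comparison: each treatment-control pair is scored on the dichotomous endpoint first, with ties broken by the continuous endpoint. This is an illustration of the general approach only, not the dissertation's exact statistic:

```python
# Generalized pairwise (U-statistic-style) comparison combining a dichotomous
# endpoint (event: 1 = bad outcome occurred) with a continuous endpoint
# (higher score = better). Hypothetical data; illustration of the idea only.

def pairwise_u(treatment, control):
    """Each subject is (event, score). Return the mean pairwise win score in [-1, 1]."""
    total = 0
    for ev_t, sc_t in treatment:
        for ev_c, sc_c in control:
            if ev_t != ev_c:            # dichotomous endpoint decides first
                total += 1 if ev_t < ev_c else -1
            elif sc_t != sc_c:          # tie broken by the continuous endpoint
                total += 1 if sc_t > sc_c else -1
    return total / (len(treatment) * len(control))

# Hypothetical subjects as (event, score) pairs.
treated = [(0, 7.2), (0, 6.5), (1, 5.0)]
controls = [(0, 6.0), (1, 4.8), (1, 5.5)]
u = pairwise_u(treated, controls)  # > 0 favors treatment
```

A hierarchy like this lets subjects who lack the continuous measurement (e.g. due to a contraindication) still contribute through the dichotomous comparison.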