948 results for parallel robots, cable driven, underactuated, calibration, sensitivity, accuracy
Abstract:
BACKGROUND: Contrast detection is an important aspect of the assessment of visual function; however, clinical tests evaluate limited spatial frequencies and contrasts. This study validates the accuracy and inter-test repeatability of a swept-frequency near and distance mobile app Aston contrast sensitivity test, which overcomes this limitation compared to traditional charts. METHOD: Twenty subjects wearing their full refractive correction underwent contrast sensitivity testing on the new near application (near app), distance app, CSV-1000 and Pelli-Robson charts with full correction and with vision degraded by 0.8 and 0.2 Bangerter degradation foils. In addition, repeated measures using the 0.8 occluding foil were taken. RESULTS: The mobile apps (near more than distance, p = 0.005) recorded a higher contrast sensitivity than the printed tests (p < 0.001); however, all charts showed a reduction in measured contrast sensitivity with degradation (p < 0.001) and a similar decrease with increasing spatial frequency (interaction p > 0.05). Although the coefficient of repeatability was lowest for the Pelli-Robson charts (0.14 log units), the mobile app charts measured more spatial frequencies, took less time and were more repeatable (near: 0.26 to 0.37 log units; distance: 0.34 to 0.39 log units) than the CSV-1000 (0.30 to 0.93 log units). The duration to complete the CSV-1000 was 124 ± 37 seconds, Pelli-Robson 78 ± 27 seconds, near app 53 ± 15 seconds and distance app 107 ± 36 seconds. CONCLUSIONS: While there were differences between charts in the contrast levels measured, the new Aston near and distance apps are valid, repeatable and time-efficient methods of assessing contrast sensitivity at multiple spatial frequencies.
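As a point of reference for the repeatability figures above, a minimal sketch of the Bland-Altman coefficient of repeatability (1.96 × the SD of test-retest differences); the readings below are illustrative, not study data, and the authors' exact computation is not specified here:

import numpy as np

def coefficient_of_repeatability(test, retest):
    """Bland-Altman coefficient of repeatability: 1.96 x SD of the
    test-retest differences (here in log contrast sensitivity units)."""
    diffs = np.asarray(test, float) - np.asarray(retest, float)
    return 1.96 * np.std(diffs, ddof=1)

# hypothetical log CS readings for one spatial frequency, two visits
visit1 = [1.65, 1.50, 1.72, 1.58, 1.61]
visit2 = [1.70, 1.47, 1.66, 1.62, 1.55]
print(f"CoR = {coefficient_of_repeatability(visit1, visit2):.2f} log units")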
Abstract:
Objective: To test the practicality and effectiveness of cheap, ubiquitous, consumer-grade smartphones to discriminate Parkinson’s disease (PD) subjects from healthy controls, using self-administered tests of gait and postural sway. Background: Existing tests for the diagnosis of PD are based on subjective neurological examinations, performed in-clinic. Objective movement symptom severity data, collected using widely accessible technologies such as smartphones, would enable the remote characterization of PD symptoms based on self-administered, behavioral tests. Smartphones, when backed up by interviews using web-based videoconferencing, could make it feasible for expert neurologists to perform diagnostic testing on large numbers of individuals at low cost. However, to date, the compliance rate of testing using smartphones has not been assessed. Methods: We conducted a one-month controlled study with twenty participants, comprising 10 PD subjects and 10 controls. All participants were provided identical LG Optimus S smartphones, capable of recording tri-axial acceleration. Using these smartphones, patients conducted self-administered, short (less than 5 minute) controlled gait and postural sway tests. We analyzed a wide range of summary measures of gait and postural sway from the accelerometry data. Using statistical machine learning techniques, we identified discriminating patterns in the summary measures in order to distinguish PD subjects from controls. Results: Compliance was high: all 20 participants performed an average of 3.1 tests per day for the duration of the study. Using this test data, we demonstrated cross-validated sensitivity of 98% and specificity of 98% in discriminating PD subjects from healthy controls. Conclusions: Using consumer-grade smartphone accelerometers, it is possible to distinguish PD from healthy controls with high accuracy. Since these smartphones are inexpensive (around $30 each) and easily available, and the tests are highly non-invasive and objective, we envisage that this kind of smartphone-based testing could radically increase the reach and effectiveness of experts in diagnosing PD.
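The abstract does not specify the classifier used; below is a minimal, hypothetical sketch of the general approach (summary features per recording, leave-one-out cross-validation, sensitivity and specificity from the pooled predictions) using scikit-learn with placeholder data:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, LeaveOneOut
from sklearn.metrics import confusion_matrix

# Hypothetical feature matrix: one row per recording, columns are summary
# measures of gait/postural sway (e.g. step interval, sway RMS, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))          # 40 recordings, 6 summary features
y = np.repeat([0, 1], 20)             # 0 = control, 1 = PD (placeholder labels)

pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y,
                         cv=LeaveOneOut())
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))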
Abstract:
The Intoxilyzer 5000 was tested for calibration curve linearity for ethanol vapor concentrations between 0.020 and 0.400 g/210L, with excellent linearity. Calibration error using reference solutions outside of the allowed concentration range, the response to the same ethanol reference solution at different temperatures between 34 and 38 °C, and the response to eleven chemicals, 10 mixtures of two at a time, and one mixture of four chemicals potentially found in human breath were evaluated. Potential interferents were chosen on the basis of their infrared signatures and the concentration range of solutions corresponding to the non-lethal blood concentration range of various volatile organic compounds reported in the literature. The results of this study indicate that the instrument calibrates with solutions outside the allowed range up to ±10% of the target value. Headspace FID dual-column GC analysis was used to confirm the concentrations of the solutions. Increasing the temperature of the reference solution from 34 to 38 °C resulted in linear increases in instrument-recorded ethanol readings, with an average increase of 6.25%/°C. Of the eleven chemicals studied during this experiment, six (isopropanol, toluene, methyl ethyl ketone, trichloroethylene, acetaldehyde, and methanol) could reasonably interfere with the test at non-lethal reported blood concentration ranges; the mixtures of those six chemicals showed linear additive results, with a combined effect of as much as a 0.080 g/210L reading (Florida's legal limit) without any ethanol present.
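A worked example of the reported ~6.25%/°C temperature dependence, showing how a warm reference solution would inflate the instrument reading; the target concentration is illustrative:

# Reading bias from reference-solution temperature, using the ~6.25 %/°C
# slope reported above (linear approximation).
true_conc = 0.100            # g/210 L, ethanol reference target (example value)
slope = 0.0625               # fractional increase per °C above 34 °C
for temp in (34, 35, 36, 37, 38):
    reading = true_conc * (1 + slope * (temp - 34))
    print(f"{temp} °C -> {reading:.3f} g/210 L")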
Abstract:
Despite lake sensitivity to climate change, few Florida paleolimnological studies have focused on changes in hydrology. Evidence from Florida vegetation histories raises questions about the long-term hydrologic history of Florida lakes, and a 25-year limnological dataset revealed recent climate-driven effects on Lake Annie. The objectives of this research are (1) to use modern diatom assemblages to develop methods for reconstruction of climatic and anthropogenic change and (2) to reconstruct both long-term and recent histories of Lake Annie using diatom microfossils. Paleoenvironmental reconstruction models were developed from diatom assemblages of various habitat types from modern lakes. Plankton and sediment assemblages were similar, but epiphytes were distinct, suggesting differences in sediment delivery from different parts of the lakes. Relationships between a variety of physical and chemical data and the diatoms from each habitat type were explored. Total phosphorus (TP), pH, and color were found to be the most relevant variables for reconstruction, with sediment and epiphyte assemblages having the strongest relationships to those variables. Six calibration models were constructed from the combinations of these habitat types and environmental variables. Reconstructions utilizing the weighted averaging models in this study may be used to directly reveal TP, color, and pH changes from a sediment record, which might be suggestive of hydrologic change as well. These variables were reconstructed from the diatom record from both a long-term (11,000 year) and short-term (100 year) record and showed an interaction between climate-driven and local land-use impacts on Lake Annie. The long-term record begins with Lake Annie as a wetland; the lake then filled to a high stand around 4000 years ago. A period of relative stability after that point was interrupted near the turn of the last century by subtle changes in diatom communities that indicate acidification. Abrupt changes in the diatom communities around 1970 AD suggest recovery from acidification, but concurrent hydrologic change intensified anthropogenic effects on the lake. Diatom evidence for alkalization and phosphorus loading corresponds to changes seen in the limnological record.
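A minimal sketch of a weighted-averaging (WA) calibration and reconstruction of the kind referred to above, with a toy training set; deshrinking and error estimation used in practice are omitted, and all values are hypothetical:

import numpy as np

def wa_optima(abundance, env):
    """Weighted-averaging optimum of each diatom taxon:
    abundance-weighted mean of the environmental variable (e.g. log TP)."""
    abundance = np.asarray(abundance, float)   # samples x taxa
    env = np.asarray(env, float)               # one value per sample
    return abundance.T @ env / abundance.sum(axis=0)

def wa_reconstruct(abundance, optima):
    """Infer the variable for each (fossil) sample as the abundance-weighted
    mean of the taxon optima (no deshrinking applied here)."""
    abundance = np.asarray(abundance, float)
    return abundance @ optima / abundance.sum(axis=1)

# tiny hypothetical training set: 4 lake samples x 3 taxa, with measured log TP
counts = np.array([[10, 5, 0], [8, 6, 1], [2, 9, 4], [0, 3, 12]], float)
log_tp = np.array([1.0, 1.2, 1.6, 2.0])
opt = wa_optima(counts, log_tp)
print("inferred log TP:", wa_reconstruct(counts, opt))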
Abstract:
Classification procedures, including atmospheric correction of satellite images as well as classification performance utilizing calibration and validation at different levels, were investigated in the context of a coarse land-cover classification scheme for the Pachitea Basin. Two different correction methods were tested against no correction in terms of reflectance correction towards a common response for pseudo-invariant features (PIFs). The accuracy of classifications derived from each of the three methods was then assessed in a discriminant analysis using cross-validation at pixel, polygon, region, and image levels. Results indicate that only regression-adjusted images using PIFs show no significant difference between images in any of the bands. A comparison of classifications at different levels suggests, though, that at the pixel, polygon, and region levels the accuracy of the classifications does not significantly differ between corrected and uncorrected images. Spatial patterns of land cover were analyzed in terms of colonization history, infrastructure, suitability of the land, and landownership. The actual use of the land is driven mainly by the ability to access the land and markets, as is evident in the distribution of land cover as a function of distance to rivers and roads. When considering all rivers and roads, the threshold distance at which agro-pastoral land cover switches from over-represented to under-represented is about 1 km. Suggestions of best land use seem not to affect the choice of land use. Differences in the abundance of land cover between watersheds are more prevalent than differences between colonist and indigenous groups.
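A minimal sketch of regression adjustment to pseudo-invariant features (relative radiometric normalization), assuming a simple per-band linear fit; the image arrays and PIF mask below are synthetic placeholders:

import numpy as np

def pif_normalize(subject_band, reference_band, pif_mask):
    """Relative radiometric normalization: fit a linear regression between
    the subject and reference image over pseudo-invariant features (PIFs)
    and apply the fitted gain/offset to the whole subject band."""
    x = subject_band[pif_mask].ravel()
    y = reference_band[pif_mask].ravel()
    gain, offset = np.polyfit(x, y, 1)
    return gain * subject_band + offset

# hypothetical 100x100 single-band scenes and a boolean PIF mask
rng = np.random.default_rng(1)
ref = rng.uniform(0.05, 0.4, (100, 100))
subj = 0.8 * ref + 0.02 + rng.normal(0, 0.005, ref.shape)  # miscalibrated copy
mask = rng.random(ref.shape) < 0.02                        # ~2% PIF pixels
corrected = pif_normalize(subj, ref, mask)
print("mean residual:", np.abs(corrected - ref).mean())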
Abstract:
The first Air Chemistry Observatory at the German Antarctic station Georg von Neumayer (GvN) was operated for 10 years, from 1982 to 1991. The focus of the established observational programme was on characterizing the physical properties and chemical composition of the aerosol, as well as on monitoring the changing trace gas composition of the background atmosphere, especially concerning greenhouse gases. The observatory was designed by the Institut für Umweltphysik, University of Heidelberg (UHEI-IUP). The experiments were installed inside the bivouac lodge, mounted on a sledge and put upon a snow hill to prevent snow accumulation during blizzards. All experiments were under daily control and daily performance protocols were documented. A ventilated stainless steel inlet stack (total height about 3-4 m above the snow surface) with a 50% aerodynamic cut-off diameter around 7-10 µm at wind velocities between 4-10 m/s supplied all experiments with ambient air. Contamination-free sampling was realized by several means: (i) The Air Chemistry Observatory was situated in a clean air area about 1500 m south of GvN. Because northern wind directions are very rare, contamination from the base can be excluded for most of the time. (ii) The power supply (20 kW) was provided by a cable from the main station, so no fuel-driven generator was operated in the immediate vicinity. (iii) Contamination-free sampling was controlled by the permanently recorded wind velocity and wind direction and by the condensation particle concentration. Contamination was indicated if one of the following criteria was met: wind direction within a 330°-30° sector, wind velocity <2.2 m/s or >17.5 m/s, or condensation particle concentrations >2500/cm**3 during summer, >800/cm**3 during spring/autumn and >400/cm**3 during winter. If one, or a definable combination, of these criteria was met, high-volume aerosol sampling and part of the trace gas sampling were interrupted. From 1982 through 1991-01-14 surface ozone was measured with an electrochemical concentration cell (ECC). Surface ozone mixing ratios are given in ppbv = parts per 10**9 by volume. The averaging time corresponds to the given time intervals in the data sheet. The accuracy of the values is better than ±1 ppbv and the detection limit is around 1.0 ppbv. Aerosols were sampled on two Whatman 541 cellulose filters in series and analyzed by ion chromatography at the UHEI-IUP. Generally, the sampling period was seven days but could be up to two weeks on occasion. The air flow was around 100 m**3/h and typically 10000-20000 m**3 of ambient air was forced through the filters for one sample. Concentration values are given in nanogram (ng) per 1 m**3 of air at standard pressure and temperature (1013 mbar, 273.16 K). Uncertainties of the values were approximately ±10% to ±15% for the main components MSA, chloride, nitrate, sulfate and sodium, and between ±20% and ±30% for the minor species bromide, ammonium, potassium, magnesium and calcium.
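A small sketch implementing the contamination criteria listed above as a flag function (the seasonal particle-count thresholds, wind sector and wind-speed limits are taken directly from the text); the function name and interface are illustrative:

def contaminated(wind_dir_deg, wind_speed_ms, cpc_per_cm3, season):
    """Flag contaminated air following the criteria above: wind from the
    330°-30° sector (direction of the base), wind speed below 2.2 m/s or
    above 17.5 m/s, or condensation particle counts above the seasonal limit."""
    in_sector = wind_dir_deg >= 330 or wind_dir_deg <= 30
    bad_speed = wind_speed_ms < 2.2 or wind_speed_ms > 17.5
    cpc_limit = {"summer": 2500, "spring": 800, "autumn": 800, "winter": 400}
    high_cpc = cpc_per_cm3 > cpc_limit[season]
    return in_sector or bad_speed or high_cpc

print(contaminated(350, 5.0, 300, "winter"))   # True: wind from the base sector
print(contaminated(180, 8.0, 300, "winter"))   # False: clean southerly air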
Abstract:
We present the stellar calibrator sample and the conversion from instrumental to physical units for the 24 μm channel of the Multiband Imaging Photometer for Spitzer (MIPS). The primary calibrators are A stars, and the calibration factor based on those stars is 4.54 × 10^-2 MJy sr^–1 (DN/s)^–1, with a nominal uncertainty of 2%. We discuss the data reduction procedures required to attain this accuracy; without these procedures, the calibration factor obtained using the automated pipeline at the Spitzer Science Center is 1.6% ± 0.6% lower. We extend this work to predict 24 μm flux densities for a sample of 238 stars that covers a larger range of flux densities and spectral types. We present a total of 348 measurements of 141 stars at 24 μm. This sample covers a factor of ~460 in 24 μm flux density, from 8.6 mJy up to 4.0 Jy. We show that the calibration is linear over that range with respect to target flux and background level. The calibration is based on observations made using 3 s exposures; a preliminary analysis shows that the calibration factor may be 1% and 2% lower for 10 and 30 s exposures, respectively. We also demonstrate that the calibration is very stable: over the course of the mission, repeated measurements of our routine calibrator, HD 159330, show a rms scatter of only 0.4%. Finally, we show that the point-spread function (PSF) is well measured and allows us to calibrate extended sources accurately; Infrared Astronomy Satellite (IRAS) and MIPS measurements of a sample of nearby galaxies are identical within the uncertainties.
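A short worked example applying the quoted calibration factor (4.54 × 10^-2 MJy sr^-1 per DN/s); the pixel solid angle, count rates and the omission of aperture corrections are assumptions for illustration, not values from the paper:

# Convert MIPS 24 micron instrumental rates to physical units using the
# calibration factor quoted above.
CAL = 4.54e-2                      # MJy sr^-1 (DN/s)^-1
PIX_SR = (2.45 / 206265.0) ** 2    # assumed ~2.45" square pixels, in steradians

rate_per_pixel = 1.2               # DN/s in one pixel (illustrative)
print("surface brightness:", CAL * rate_per_pixel, "MJy/sr")

aperture_rate = 120.0              # background-subtracted DN/s summed over an aperture
flux_jy = CAL * aperture_rate * PIX_SR * 1e6   # MJy -> Jy (no aperture correction)
print(f"point-source flux density ≈ {flux_jy * 1e3:.2f} mJy")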
Abstract:
Current interest in measuring quality of life is generating interest in the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; then administering each subtest to a different sample; and finally running estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternatives on the configuration of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) balance of common and unique items per subtest, (c) characteristics of the common items, and (d) criteria for the distribution of unique items across subtests. The results of this study indicate that maximizing accuracy in item parameter recovery requires subtests of the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distribution of unique items do not affect accuracy.
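A minimal sketch of how items might be assigned to subtests in a concurrent anchor-item design (a set of common anchor items shared by every subtest, with the remaining unique items split evenly); the item counts below are illustrative only:

import numpy as np

def anchor_item_design(n_items, n_subtests, n_common):
    """Assign items to subtests for a concurrent anchor-item design:
    the first n_common items are shared by every subtest, the remaining
    (unique) items are distributed evenly across subtests."""
    common = list(range(n_common))
    unique = np.array_split(np.arange(n_common, n_items), n_subtests)
    return [common + u.tolist() for u in unique]

# e.g. 60 candidate Likert items, 3 subtests, 10 common anchor items
for k, subtest in enumerate(anchor_item_design(60, 3, 10)):
    print(f"subtest {k}: {len(subtest)} items")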
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
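A minimal sketch of a non-prewhitening matched-filter observer of the kind mentioned above, applied to simulated signal-present/absent ROIs; this is not the dissertation's implementation, and the channelized Hotelling variants are not reproduced here:

import numpy as np

def npw_dprime(signal_present, signal_absent, template):
    """Non-prewhitening matched-filter observer: correlate each ROI with the
    expected-signal template and compute a detectability index d' from the
    separation of the signal-present and signal-absent score distributions."""
    t = template.ravel()
    s_scores = signal_present.reshape(len(signal_present), -1) @ t
    n_scores = signal_absent.reshape(len(signal_absent), -1) @ t
    pooled_var = 0.5 * (s_scores.var(ddof=1) + n_scores.var(ddof=1))
    return (s_scores.mean() - n_scores.mean()) / np.sqrt(pooled_var)

# toy example: 200 noisy 32x32 ROIs with and without a faint Gaussian blob
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:32, :32]
blob = 0.5 * np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 4.0 ** 2))
noise_only = rng.normal(0, 1, (200, 32, 32))
print("d' =", npw_dprime(noise_only + blob, rng.normal(0, 1, (200, 32, 32)), blob))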
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
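A small sketch of the repeated-scan subtraction technique for isolating quantum noise (the fixed phantom texture cancels in the difference image, so the ROI standard deviation divided by sqrt(2) estimates the per-image noise); the images below are synthetic:

import numpy as np

def subtraction_noise(scan_a, scan_b, roi):
    """Quantum noise from two repeated scans of the same phantom: subtract
    the images (structure cancels, noise adds in quadrature) and divide the
    ROI standard deviation by sqrt(2)."""
    diff = scan_a.astype(float) - scan_b.astype(float)
    return diff[roi].std(ddof=1) / np.sqrt(2)

# hypothetical repeated scans (HU) of a textured phantom and a central ROI mask
rng = np.random.default_rng(0)
texture = rng.normal(40, 15, (256, 256))             # fixed background texture
scan1 = texture + rng.normal(0, 10, texture.shape)   # quantum noise sigma = 10 HU
scan2 = texture + rng.normal(0, 10, texture.shape)
roi = np.zeros(texture.shape, bool); roi[96:160, 96:160] = True
print(f"noise ≈ {subtraction_noise(scan1, scan2, roi):.1f} HU")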
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Magnetic field inhomogeneity results in image artifacts including signal loss, image blurring and distortions, leading to decreased diagnostic accuracy. The conventional multi-coil (MC) shimming method employs both RF coils and shimming coils, whose mutual interference induces a tradeoff between RF signal-to-noise ratio (SNR) and shimming performance. To address this issue, RF coils were integrated with direct-current (DC) shim coils to shim the field inhomogeneity while concurrently transmitting and receiving RF signal without being blocked by the shim coils. The currents applied to the new coils, termed iPRES (integrated parallel reception, excitation and shimming), were optimized in numerical simulation to improve the shimming performance. The objective of this work is to offer a guideline for designing optimal iPRES coil arrays to shim the abdomen.
In this thesis work, the main field (B0) inhomogeneity was evaluated by the root mean square error (RMSE). To investigate the shimming abilities of iPRES coil arrays, a set of human abdomen MRI data was collected for the numerical simulations. Thereafter, different simplified iPRES(N) coil arrays were numerically modeled, including a 1-channel iPRES coil and 8-channel iPRES coil arrays. For the 8-channel iPRES coil arrays, each RF coil was split into smaller DC loops in the x, y and z directions to provide extra shimming freedom. Additionally, the number of DC loops in an RF coil was increased from 1 to 5 to find the optimal divisions in the z direction. Furthermore, switches were numerically implemented into the iPRES coils to reduce the number of power supplies while still providing shimming performance similar to that of equivalent iPRES coil arrays.
The optimizations demonstrate that the shimming ability of an iPRES coil array increases with the number of DC loops per RF coil. Furthermore, the z-direction divisions tend to be more effective in reducing field inhomogeneity than the x and y divisions. Moreover, the shimming performance of an iPRES coil array gradually reaches a saturation level when the number of DC loops per RF coil is large enough. Finally, when switches were numerically implemented in the iPRES(4) coil array, the number of power supplies could be reduced from 32 to 8 while keeping the shimming performance similar to iPRES(3) and better than iPRES(1). This thesis work offers guidance for the design of iPRES coil arrays.
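A minimal sketch of the kind of least-squares shim-current optimization described above, minimizing the B0 RMSE given per-loop field maps; the matrices are random placeholders, and the simple post-hoc current clipping stands in for a properly constrained solve:

import numpy as np

def optimize_shim_currents(field_maps_per_amp, b0_map, current_limit):
    """Least-squares shim optimization: find the DC current in each loop that
    minimizes the residual B0 inhomogeneity (RMSE), with a crude per-channel
    current limit. field_maps_per_amp: (n_voxels, n_loops) field produced by
    1 A in each loop; b0_map: (n_voxels,) measured inhomogeneity."""
    A, b = field_maps_per_amp, -b0_map
    currents, *_ = np.linalg.lstsq(A, b, rcond=None)
    currents = np.clip(currents, -current_limit, current_limit)
    residual = b0_map + A @ currents
    return currents, np.sqrt(np.mean(residual ** 2))

# toy example: 500 voxels, 8 loops (values are hypothetical)
rng = np.random.default_rng(0)
A = rng.normal(0, 1.0, (500, 8))                            # Hz per ampere per voxel
b0 = A @ rng.normal(0, 0.5, 8) + rng.normal(0, 0.2, 500)    # Hz
I, rmse = optimize_shim_currents(A, b0, current_limit=2.0)
print("shim currents (A):", np.round(I, 2), " residual RMSE (Hz):", round(rmse, 3))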
Abstract:
Purpose: To develop, evaluate and apply a novel high-resolution 3D remote dosimetry protocol for validation of MRI guided radiation therapy treatments (MRIdian® by ViewRay®). We demonstrate the first application of the protocol (including two small but required new correction terms) utilizing radiochromic 3D plastic PRESAGE® with optical-CT readout.
Methods: A detailed study of PRESAGE® dosimeters (2 kg) was conducted to investigate the temporal and spatial stability of the radiation-induced optical density change (ΔOD) over 8 days. Temporal stability was investigated on 3 dosimeters irradiated with four equally-spaced square 6 MV fields delivering doses between 10 cGy and 300 cGy. Doses were imaged (read out) by optical-CT at multiple intervals. Spatial stability of the ΔOD response was investigated on 3 other dosimeters irradiated uniformly with 15 MV extended-SSD fields with doses of 15 cGy, 30 cGy and 60 cGy. Temporal and spatial (radial) changes were investigated using CERR and MATLAB's Curve Fitting Toolbox. A protocol was developed to extrapolate measured ΔOD readings at t = 48 hr (the typical shipment time in remote dosimetry) to time t = 1 hr.
Results: All dosimeters were observed to gradually darken with time (<5% per day). Consistent intra-batch sensitivity (0.0930 ± 0.002 ΔOD/cm/Gy) and linearity (R2 = 0.9996) were observed at t = 1 hr. A small radial effect (<3%) was observed, attributed to curing thermodynamics during manufacture. The refined remote dosimetry protocol (including polynomial correction terms for the temporal and spatial effects, CT and CR) was then applied to independent dosimeters irradiated with MR-IGRT treatments. Excellent line-profile agreement and 3D-gamma results for 3%/3 mm, 10% threshold were observed, with an average passing rate of 96.5% ± 3.43%.
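A small illustration of the linear sensitivity fit (ΔOD/cm versus dose) underlying the intra-batch sensitivity quoted above; the dose-response values below are made up to be of similar magnitude and are not measured data:

import numpy as np

# Linear dose-response fit for a radiochromic dosimeter: optical density change
# per cm versus delivered dose; the slope is the batch sensitivity (dOD/cm/Gy).
dose_gy = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
dod_per_cm = np.array([0.000, 0.047, 0.092, 0.187, 0.278])   # illustrative values
slope, intercept = np.polyfit(dose_gy, dod_per_cm, 1)
r2 = np.corrcoef(dose_gy, dod_per_cm)[0, 1] ** 2
print(f"sensitivity = {slope:.4f} dOD/cm/Gy, R^2 = {r2:.4f}")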
Conclusion: A novel 3D remote dosimetry protocol is presented capable of validation of advanced radiation treatments (including MR-IGRT). The protocol uses 2kg radiochromic plastic dosimeters read-out by optical-CT within a week of treatment. The protocol requires small corrections for temporal and spatially-dependent behaviors observed between irradiation and readout.
Abstract:
Visual Inspection with Acetic Acid (VIA) and Visual Inspection with Lugol’s Iodine (VILI) are increasingly recommended in various cervical cancer screening protocols in low-resource settings. Although VIA is more widely used, VILI has been advocated as an easier and more specific screening test. VILI has not been well validated as a stand-alone screening test compared to VIA, or validated for use in HIV-infected women. We carried out a randomized clinical trial to compare the diagnostic accuracy of VIA and VILI among HIV-infected women. Women attending the Family AIDS Care and Education Services (FACES) clinic in western Kenya were enrolled and randomized to undergo either VIA or VILI with colposcopy. Lesions suspicious for cervical intraepithelial neoplasia 2 or greater (CIN2+) were biopsied. Between October 2011 and June 2012, 654 women were randomized to undergo VIA or VILI. The test positivity rates were 26.2% for VIA and 30.6% for VILI (p = 0.22). The rate of detection of CIN2+ was 7.7% in the VIA arm and 11.5% in the VILI arm (p = 0.10). There was no significant difference in the diagnostic performance of VIA and VILI for the detection of CIN2+. Sensitivity and specificity were 84.0% and 78.6%, respectively, for VIA, and 84.2% and 76.4% for VILI. The positive and negative predictive values were 24.7% and 98.3% for VIA, and 31.7% and 97.4% for VILI. Among women with CD4+ count < 350, VILI had a significantly decreased specificity (66.2%) compared to VIA in the same group (83.9%, p = 0.02) and compared to VILI performed among women with CD4+ count ≥ 350 (79.7%, p = 0.02). VIA and VILI had similar diagnostic accuracy and rates of CIN2+ detection among HIV-infected women.
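For reference, a minimal sketch of how sensitivity, specificity, PPV and NPV are computed from a 2×2 table of screening results against the reference standard; the counts below are hypothetical and are not the trial's data:

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of screening
    results against the biopsy/colposcopy reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# hypothetical counts for one screening arm
print(diagnostic_accuracy(tp=21, fp=64, fn=4, tn=236))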
Abstract:
Scientists planning to use underwater stereoscopic image technologies are often faced with numerous problems during the methodological implementations: commercial equipment is too expensive; the setup or calibration is too complex; or the imaging processing (i.e. measuring objects in the stereo-images) is too complicated to be performed without a time-consuming phase of training and evaluation. The present paper addresses some of these problems and describes a workflow for stereoscopic measurements for marine biologists. It also provides instructions on how to assemble an underwater stereo-photographic system with two digital consumer cameras and gives step-by-step guidelines for setting up the hardware. The second part details a software procedure to correct stereo-image pairs for lens distortions, which is especially important when using cameras with non-calibrated optical units. The final part presents a guide to the process of measuring the lengths (or distances) of objects in stereoscopic image pairs. To reveal the applicability and the restrictions of the described systems and to test the effects of different types of camera (a compact camera and an SLR type), experiments were performed to determine the precision and accuracy of two generic stereo-imaging units: a diver-operated system based on two Olympus Mju 1030SW compact cameras and a cable-connected observatory system based on two Canon 1100D SLR cameras. In the simplest setup without any correction for lens distortion, the low-budget Olympus Mju 1030SW system achieved mean accuracy errors (percentage deviation of a measurement from the object's real size) between 10.2% and -7.6% (overall mean value: -0.6%), depending on the size, orientation and distance of the measured object from the camera. With the single lens reflex (SLR) system, very similar values between 10.1% and -3.4% (overall mean value: -1.2%) were observed. Correction of the lens distortion significantly improved the mean accuracy errors of either system. Moreover, system precision (spread of the accuracy) improved significantly in both systems. Neither the use of a wide-angle converter nor multiple reassembly of the system had a significant negative effect on the results. The study shows that underwater stereophotography, independent of the system, has a high potential for robust and non-destructive in situ sampling and can be used without prior specialist training.
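A minimal sketch of the measurement step (distortion correction, triangulation, 3D distance) using OpenCV, assuming stereo calibration parameters of the kind produced by cv2.stereoCalibrate; the synthetic check at the end uses an idealized rectified pair, not the systems described above:

import numpy as np
import cv2

def stereo_length(pt_left, pt_right, K1, d1, K2, d2, R, t):
    """Length of an object from a calibrated stereo pair: undistort the pixel
    coordinates of its two end points, triangulate them, and return the
    Euclidean distance between the reconstructed 3D points.
    K, d, R, t would normally come from cv2.stereoCalibrate."""
    nl = cv2.undistortPoints(pt_left.reshape(-1, 1, 2).astype(np.float64), K1, d1)
    nr = cv2.undistortPoints(pt_right.reshape(-1, 1, 2).astype(np.float64), K2, d2)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # left camera at the origin
    P2 = np.hstack([R, t.reshape(3, 1)])                  # right camera pose
    X = cv2.triangulatePoints(P1, P2, nl.reshape(-1, 2).T, nr.reshape(-1, 2).T)
    X = (X[:3] / X[3]).T                                   # homogeneous -> 3D points
    return float(np.linalg.norm(X[0] - X[1]))

# synthetic check: 0.5 m object, 2 m away, ideal rectified pair with 0.30 m baseline
K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])
d0 = np.zeros(5)
R, t = np.eye(3), np.array([-0.30, 0.0, 0.0])
ends = np.array([[-0.25, 0.0, 2.0], [0.25, 0.0, 2.0]])    # end points, left-camera frame
def project(P, X):
    x = (K @ (P[:, :3] @ X.T + P[:, 3:])).T
    return x[:, :2] / x[:, 2:]
pl = project(np.hstack([np.eye(3), np.zeros((3, 1))]), ends)
pr = project(np.hstack([R, t.reshape(3, 1)]), ends)
print(stereo_length(pl, pr, K, d0, K, d0, R, t))           # ≈ 0.5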
Abstract:
The drag on a nacelle model was investigated experimentally and computationally to provide guidance and insight into the capabilities of RANS-based CFD. The research goal was to determine whether industry-constrained CFD could participate in the aerodynamic design of nacelle bodies. Grid refinement level, turbulence model and near-wall treatment settings to predict drag to the highest accuracy were key deliverables. Cold-flow low-speed wind tunnel experiments were conducted at a Reynolds number of 6×10^5, 293 K and a Mach number of 0.1. Total drag force was measured by a six-component force balance. Detailed wake analysis, using a seven-hole pressure probe traverse, allowed for drag decomposition via the far-field method. Drag decomposition was performed through a range of angles of attack between 0° and 45°. Both methods agreed on total drag within their respective uncertainties. Reversed flow at the measurement plane and saturation of the load cell caused discrepancies at high angles of attack. A parallel CFD study was conducted using commercial software, ICEM 15.0 and FLUENT 15.0. Simulating a similar nacelle geometry operating under inlet boundary conditions obtained through wind tunnel characterization allowed for direct comparisons with experiment. It was determined that the Realizable k-ϵ model was best suited for drag prediction of this geometry. This model predicted the axial momentum loss and secondary flow in the wake, as well as the integrated surface forces, within experimental error up to 20° angle of attack. SST k-ω required additional surface grid resolution on the nacelle suction side, resulting in 15% more elements, due to separation point prediction sensitivity. It was further recommended to apply enhanced wall treatment to more accurately capture the viscous drag and separated flow structures. Overall, total drag was predicted within 5% at 0° angle of attack and 10% at 20°, each within experimental uncertainty. Furthermore, the form and induced drag predicted by CFD and measured by the wake traverse showed good agreement, which indicated that CFD captured the key flow features accurately despite simplification of the nacelle interior geometry.
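A minimal sketch of a first-order wake-survey (far-field) drag estimate from traverse data, keeping only the axial momentum-deficit term; the grid, density and deficit shape below are illustrative assumptions rather than the study's measurements:

import numpy as np

def wake_momentum_drag(u, dA, rho, U_inf):
    """First-order far-field estimate of profile drag from a wake survey:
    D ≈ sum over the traverse grid of rho * u * (U_inf - u) * dA.
    (Pressure and crossflow terms of the full far-field method are omitted.)"""
    u = np.asarray(u, float)
    return float(np.sum(rho * u * (U_inf - u) * dA))

# hypothetical wake plane: 50x50 grid, 5 mm spacing, Gaussian velocity deficit
U_inf, rho = 34.0, 1.20                      # m/s, kg/m^3 (roughly Mach 0.1 at 293 K)
y, z = np.meshgrid(np.linspace(-0.125, 0.125, 50), np.linspace(-0.125, 0.125, 50))
u = U_inf * (1 - 0.15 * np.exp(-(y**2 + z**2) / (2 * 0.03**2)))
drag = wake_momentum_drag(u, dA=0.005**2, rho=rho, U_inf=U_inf)
print(f"profile drag ≈ {drag:.2f} N")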
Abstract:
OBJECTIVE Cannabidiol (CBD) and Δ9-tetrahydrocannabivarin (THCV) are nonpsychoactive phytocannabinoids affecting lipid and glucose metabolism in animal models. This study set out to examine the effects of these compounds in patients with type 2 diabetes. RESEARCH DESIGN AND METHODS In this randomized, double-blind, placebo-controlled study, 62 subjects with noninsulin-treated type 2 diabetes were randomized to five treatment arms: CBD (100 mg twice daily), THCV (5 mg twice daily), 1:1 ratio of CBD and THCV (5 mg/5 mg, twice daily), 20:1 ratio of CBD and THCV (100 mg/5 mg, twice daily), or matched placebo for 13 weeks. The primary end point was a change in HDL-cholesterol concentrations from baseline. Secondary/tertiary end points included changes in glycemic control, lipid profile, insulin sensitivity, body weight, liver triglyceride content, adipose tissue distribution, appetite, markers of inflammation, markers of vascular function, gut hormones, circulating endocannabinoids, and adipokine concentrations. Safety and tolerability end points were also evaluated. RESULTS Compared with placebo, THCV significantly decreased fasting plasma glucose (estimated treatment difference [ETD] = −1.2 mmol/L; P < 0.05) and improved pancreatic β-cell function (HOMA2 β-cell function [ETD = −44.51 points; P < 0.01]), adiponectin (ETD = −5.9 × 10^6 pg/mL; P < 0.01), and apolipoprotein A (ETD = −6.02 mmol/L; P < 0.05), although plasma HDL was unaffected. Compared with baseline (but not placebo), CBD decreased resistin (−898 pg/mL; P < 0.05) and increased glucose-dependent insulinotropic peptide (21.9 pg/mL; P < 0.05). None of the combination treatments had a significant impact on end points. CBD and THCV were well tolerated. CONCLUSIONS THCV could represent a new therapeutic agent in glycemic control in subjects with type 2 diabetes.