960 results for radiological contrast
Abstract:
Background: The Melbourne Edge Test (MET) is a portable forced-choice edge-detection contrast sensitivity (CS) test. The original externally illuminated paper test has been superseded by a backlit version. The aim of this study was to establish normative values for age and to assess change with visual impairment. Method: The MET was administered to 168 people with normal vision (18-93 years old) and 93 patients with visual impairment (39-97 years old). Distance visual acuity (VA) was measured with a logMAR chart. Results: In those eyes without disease, MET CS was stable until the age of 50 years (23.8 ± 0.7 dB), after which it decreased at a rate of ≈1.5 dB per decade. Compared with normative values, people with low vision were found to have significantly reduced CS, which could not be totally accounted for by reduced VA. Conclusions: The MET provides a quick and easy measure of CS, which highlights a reduction in visual function that may not be detectable using VA measurements. © 2004 The College of Optometrists.
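The reported age norm lends itself to a one-line piecewise model. A minimal sketch (the function name and the exact piecewise-linear form are illustrative assumptions, not the study's fitted regression):

```python
def predicted_met_cs(age_years):
    """Normative MET contrast sensitivity (dB) by age.

    Stable at about 23.8 dB up to age 50, then falling by roughly
    1.5 dB per decade, as reported in the abstract. The piecewise-linear
    form and function name are assumptions for illustration.
    """
    if age_years <= 50:
        return 23.8
    return 23.8 - 1.5 * (age_years - 50) / 10.0
```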
Abstract:
Aims: To determine the visual outcome following initiation of brimonidine therapy in glaucoma. Methods: 16 newly diagnosed, previously untreated glaucoma patients were randomly assigned to either timolol 0.5% or brimonidine 0.2%. Visual acuity, contrast sensitivity (CS), visual fields, intraocular pressure (IOP), blood pressure, and heart rate were evaluated at baseline and after 3 months. Results: IOP reduction was similar for both groups (p<0.05). Brimonidine improved CS: in the right eye at 6 and 12 cpd (p = 0.043, p = 0.017); in the left eye at 3 and 12 cpd (p = 0.044, p = 0.046). Timolol reduced CS at 18 cpd in the right eye (p = 0.041). There was no change in any other measured parameters. Conclusion: Glaucoma patients exhibit improved CS on initiation of brimonidine therapy.
Abstract:
BACKGROUND: Contrast detection is an important aspect of the assessment of visual function; however, clinical tests evaluate limited spatial frequencies and contrasts. This study validates the accuracy and inter-test repeatability of a swept-frequency near and distance mobile app Aston contrast sensitivity test, which overcomes this limitation compared to traditional charts. METHOD: Twenty subjects wearing their full refractive correction underwent contrast sensitivity testing on the new near application (near app), distance app, CSV-1000 and Pelli-Robson charts with full correction and with vision degraded by 0.8 and 0.2 Bangerter degradation foils. In addition, repeated measures using the 0.8 occluding foil were taken. RESULTS: The mobile apps (near more than distance, p = 0.005) recorded a higher contrast sensitivity than printed tests (p < 0.001); however, all charts showed a reduction in measured contrast sensitivity with degradation (p < 0.001) and a similar decrease with increasing spatial frequency (interaction p > 0.05). Although the coefficient of repeatability was lowest for the Pelli-Robson charts (0.14 log units), the mobile app charts measured more spatial frequencies, took less time and were more repeatable (near: 0.26 to 0.37 log units; distance: 0.34 to 0.39 log units) than the CSV-1000 (0.30 to 0.93 log units). The duration to complete the CSV-1000 was 124 ± 37 seconds, Pelli-Robson 78 ± 27 seconds, near app 53 ± 15 seconds and distance app 107 ± 36 seconds. CONCLUSIONS: While there were differences between charts in contrast levels measured, the new Aston near and distance apps are a valid, repeatable and time-efficient method of assessing contrast sensitivity at multiple spatial frequencies.
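The repeatability figures quoted above are coefficients of repeatability. A minimal Bland-Altman-style sketch of how such a figure is computed from paired test-retest scores (the input format and function name are assumptions):

```python
import statistics

def coefficient_of_repeatability(test_scores, retest_scores):
    """Bland-Altman coefficient of repeatability (CoR) for paired scores.

    CoR = 1.96 * SD of the test-retest differences; smaller values mean
    better repeatability. Inputs are assumed to be paired log-unit
    contrast sensitivity scores; names are illustrative.
    """
    diffs = [a - b for a, b in zip(test_scores, retest_scores)]
    return 1.96 * statistics.stdev(diffs)
```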
Abstract:
Binocular combination for first-order (luminance-defined) stimuli has been widely studied, but we know rather little about this binocular process for spatial modulations of contrast (second-order stimuli). We used phase-matching and amplitude-matching tasks to assess binocular combination of second-order phase and modulation depth simultaneously. With fixed modulation in one eye, we found that binocularly perceived phase was shifted, and perceived amplitude increased almost linearly as modulation depth in the other eye increased. At larger disparities, the phase shift was larger and the amplitude change was smaller. The degree of interocular correlation of the carriers had no influence. These results can be explained by an initial extraction of the contrast envelopes before binocular combination (consistent with the lack of dependence on carrier correlation) followed by a weighted linear summation of second-order modulations in which the weights (gains) for each eye are driven by the first-order carrier contrasts, as previously found for first-order binocular combination. Perceived modulation depth fell markedly with increasing phase disparity, unlike previous findings that perceived first-order contrast was almost independent of phase disparity. We present a simple revision to a widely used interocular gain-control theory that unifies first- and second-order binocular summation with a single principle, contrast-weighted summation, and we further elaborate the model for first-order combination. Conclusion: Second-order combination is controlled by first-order contrast.
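The proposed combination rule, a weighted linear sum with each eye's gain driven by signal strength, can be sketched as follows (the specific weight rule, the gamma exponent and all names are illustrative assumptions, not the paper's fitted model):

```python
import cmath

def binocular_combination(mod_left, mod_right, phase_disparity, gamma=1.0):
    """Weighted linear summation of the two eyes' second-order modulations.

    Each eye's weight (gain) is driven by the relative strength of its
    signal, in the spirit of contrast-weighted (Ding-Sperling-style)
    binocular combination. The weight rule and gamma are illustrative
    assumptions, not the paper's fitted model.
    """
    w_left = mod_left ** gamma / (mod_left ** gamma + mod_right ** gamma)
    w_right = 1.0 - w_left
    combined = (w_left * mod_left * cmath.exp(1j * phase_disparity / 2)
                + w_right * mod_right * cmath.exp(-1j * phase_disparity / 2))
    return abs(combined), cmath.phase(combined)  # perceived amplitude, phase
```

With equal modulation in the two eyes the perceived phase is zero and the perceived amplitude falls as phase disparity grows, matching the qualitative pattern reported above.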
Abstract:
The study reports an advance in designing copper-based redox sensing MRI contrast agents. Although the data demonstrate that copper(II) complexes are not able to compete with lanthanoids species in terms of contrast, the redox-dependent switch between diamagnetic copper(I) and paramagnetic copper(II) yields a novel redox-sensitive contrast moiety with potential for reversibility.
Abstract:
Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al., 2012) and found it to be well-described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target and then sums over signal area. Modelling shows our results to be consistent with a wide field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
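The fourth-root law follows from the noisy energy model by a simple argument: square-law transduction makes the summed signal grow as area times contrast squared, while independent local noise grows only as the square root of area, so threshold contrast falls as the fourth root of area. A minimal sketch (the constant k is arbitrary):

```python
def threshold_noisy_energy(area, k=1.0):
    """Detection threshold predicted by a wide-field noisy energy integrator.

    Square-law transduction gives a summed response ~ area * c**2, while
    independent local noise sums to SD ~ sqrt(area); hence
    d' ~ sqrt(area) * c**2 and threshold contrast ~ area ** -0.25,
    i.e. a log-log slope of -1/4. k is an arbitrary sensitivity constant.
    """
    return k * area ** -0.25
```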
Abstract:
How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various sized cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns contained a pattern with much higher contrast than the other, that determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low contrast additions for one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
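The simple pooling rules that the study tested and rejected (RMS, energy, linear sum, max) are easy to state explicitly. A minimal sketch over a list of local contrasts (the function name is an assumption; the successful gain-control model is not reproduced here):

```python
import math

def pooled_contrast(local_contrasts, rule="max"):
    """The simple global-contrast pooling rules tested in the study.

    Implements RMS, energy, linear sum and max over a list of local
    contrasts; none of these predicted the matching data. The winning
    gain-control model is not reproduced here.
    """
    if rule == "max":
        return max(local_contrasts)
    if rule == "sum":
        return sum(local_contrasts)
    if rule == "energy":
        return sum(c * c for c in local_contrasts)
    if rule == "rms":
        return math.sqrt(sum(c * c for c in local_contrasts) / len(local_contrasts))
    raise ValueError("unknown rule: " + rule)
```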
Abstract:
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50–100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
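The MAX-over-noisy-mechanisms decision rule described above can be sketched directly (Gaussian template noise and all parameter names are illustrative assumptions):

```python
import random

def max_over_templates(template_drives, noise_sd=1.0, rng=None):
    """MAX decision across noisy texture-density templates.

    Each template's response is its deterministic signal drive plus
    independent Gaussian noise; the observer's decision variable is the
    maximum across templates. Parameter names are illustrative.
    """
    rng = rng or random.Random(0)
    return max(d + rng.gauss(0.0, noise_sd) for d in template_drives)
```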
Abstract:
The improvement in living standards and the development of telecommunications have led to a large increase in the number of Internet users in China. It has been reported by the China National Network Information Center that the number of Internet users in China had reached 33.7 million in 2001, ranking the country third in the world. This figure also shows that more and more Chinese residents have accepted the Internet and use it to obtain information and complete their travel planning. Milne and Ateljevic stated that the integration of computing and telecommunications would create a global information network based mostly on the Internet. The Internet, especially the World Wide Web, has had a great impact on the hospitality and tourism industry in recent years. The WWW plays an important role in mediating between customers and hotel companies as a place to acquire information and transact business.
Abstract:
Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another as to magnitude along some continuum. The observer must report in which interval the stimulus had a larger magnitude. The standard difference model from signal detection theory analyses posits that order of presentation should not affect the results of the comparison, something known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented illustrating the conventional failure of the balance condition and testing the hypothesis that time-order errors result from contamination by the factors included in the model.
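A time-order error can be captured by adding a bias term to the standard difference model, so that the balance condition fails whenever the bias is nonzero. A minimal sketch assuming Gaussian internal noise (the bias term delta and all names are illustrative, not the paper's full model):

```python
from math import erf, sqrt

def prob_choose_second(s1, s2, delta=0.0, sigma=1.0):
    """Difference model for 2IFC with a time-order bias term.

    Decision variable: (s2 - s1) + delta, with Gaussian internal noise
    of SD sigma in each interval (so the difference has SD sigma*sqrt(2)).
    delta = 0 recovers the balance condition; delta != 0 produces a
    time-order error. All parameter names are illustrative.
    """
    z = ((s2 - s1) + delta) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))
```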
Abstract:
Recent discussion regarding whether the noise that limits 2AFC discrimination performance is fixed or variable has focused either on describing experimental methods that presumably dissociate the effects of response mean and variance or on reanalyzing a published data set with the aim of determining how to solve the question through goodness-of-fit statistics. This paper illustrates that the question cannot be solved by fitting models to data and assessing goodness-of-fit because data on detection and discrimination performance can be indistinguishably fitted by models that assume either type of noise when each is coupled with a convenient form for the transducer function. Thus, success or failure at fitting a transducer model merely illustrates the capability (or lack thereof) of some particular combination of transducer function and variance function to account for the data, but it cannot disclose the nature of the noise. We also comment on some of the issues that have been raised in a recent exchange on the topic, namely, the existence of additional constraints for the models, the presence of asymmetric asymptotes, the likelihood of history-dependent noise, and the potential of certain experimental methods to dissociate the effects of response mean and variance.
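The central point, that fixed-noise and variable-noise models can mimic one another given a convenient transducer, has a one-line demonstration: a square-law transducer with unit noise yields exactly the same d' as a linear transducer whose noise shrinks as 1/c. A minimal sketch (both model choices are illustrative):

```python
def dprime_fixed_noise(c):
    """Fixed-noise model: square-law transducer mu(c) = c**2, sigma = 1."""
    return c ** 2 / 1.0

def dprime_variable_noise(c):
    """Variable-noise model: linear transducer mu(c) = c with noise
    sigma(c) = 1/c, chosen to mimic the fixed-noise model above.
    Requires c > 0. Both model choices are illustrative.
    """
    return c / (1.0 / c)
```

Since d'(c) is identical for the two models at every contrast, no performance-based goodness-of-fit comparison can tell them apart.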
Abstract:
The transducer function mu for contrast perception describes the nonlinear mapping of stimulus contrast onto an internal response. Under a signal detection theory approach, the transducer model of contrast perception states that the internal response elicited by a stimulus of contrast c is a random variable with mean mu(c). Using this approach, we derive the formal relations between the transducer function, the threshold-versus-contrast (TvC) function, and the psychometric functions for contrast detection and discrimination in 2AFC tasks. We show that the mathematical form of the TvC function is determined only by mu, and that the psychometric functions for detection and discrimination have a common mathematical form with common parameters emanating from, and only from, the transducer function mu and the form of the distribution of the internal responses. We discuss the theoretical and practical implications of these relations, which have bearings on the tenability of certain mathematical forms for the psychometric function and on the suitability of empirical approaches to model validation. We also present the results of a comprehensive test of these relations using two alternative forms of the transducer model: a three-parameter version that renders logistic psychometric functions and a five-parameter version using Foley's variant of the Naka-Rushton equation as transducer function. Our results support the validity of the formal relations implied by the general transducer model, and the two versions that were contrasted account for our data equally well.
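Foley's variant of the Naka-Rushton equation, used above as a transducer, has the general form mu(c) = k·c^p / (z + c^q) with p > q. A minimal sketch (the default parameter values are illustrative, not the paper's fitted ones):

```python
def foley_transducer(c, p=2.4, q=2.0, z=0.01, k=1.0):
    """Foley-style variant of the Naka-Rushton contrast transducer.

    mu(c) = k * c**p / (z + c**q) with p > q: accelerating at low
    contrast, compressive at high contrast. Default parameter values
    are illustrative, not the paper's fitted ones.
    """
    return k * c ** p / (z + c ** q)
```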
Abstract:
Magnetic resonance imaging is a research and clinical tool that has been applied in a wide variety of sciences. One area of magnetic resonance imaging that has exhibited terrific promise and growth in the past decade is magnetic susceptibility imaging. Imaging tissue susceptibility provides insight into the microstructural organization and chemical properties of biological tissues, but this image contrast is not well understood. The purpose of this work is to develop effective approaches to image, assess, and model the mechanisms that generate both isotropic and anisotropic magnetic susceptibility contrast in biological tissues, including myocardium and central nervous system white matter.
This document contains the first report of MRI-measured susceptibility anisotropy in myocardium. Intact mouse heart specimens were scanned using MRI at 9.4 T to ascertain both the magnetic susceptibility and myofiber orientation of the tissue. The susceptibility anisotropy of myocardium was observed and measured by modelling the apparent tissue susceptibility as a function of the myofiber angle with respect to the applied magnetic field. A multi-filament model of myocardial tissue revealed that the diamagnetically anisotropic α-helix peptide bonds in myofilament proteins are capable of producing bulk susceptibility anisotropy on a scale measurable by MRI, and are potentially the chief sources of the experimentally observed anisotropy.
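The angle dependence described here has the generic form of projecting an axially symmetric tensor onto the field direction. A minimal sketch (this is the standard tensor-projection expression, not the dissertation's fitted multi-filament model; variable names are illustrative):

```python
import math

def apparent_susceptibility(chi_par, chi_perp, theta_rad):
    """Apparent susceptibility of an axially symmetric tensor vs. fiber angle.

    chi(theta) = chi_perp + (chi_par - chi_perp) * cos(theta)**2, where
    theta is the angle between the myofiber axis and B0. This is the
    generic tensor-projection form; variable names are illustrative.
    """
    return chi_perp + (chi_par - chi_perp) * math.cos(theta_rad) ** 2
```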
The growing use of paramagnetic contrast agents in magnetic susceptibility imaging motivated a series of investigations regarding the effect of these exogenous agents on susceptibility imaging in the brain, heart, and kidney. In each of these organs, gadolinium increases susceptibility contrast and anisotropy, though the enhancements depend on the tissue type, compartmentalization of contrast agent, and complex multi-pool relaxation. In the brain, the introduction of paramagnetic contrast agents actually makes white matter tissue regions appear more diamagnetic relative to the reference susceptibility. Gadolinium-enhanced MRI yields tensor-valued susceptibility images with eigenvectors that more accurately reflect the underlying tissue orientation.
Despite the boost gadolinium provides, tensor-valued susceptibility image reconstruction is prone to image artifacts. A novel algorithm was developed to mitigate these artifacts by incorporating orientation-dependent tissue relaxation information into susceptibility tensor estimation. The technique was verified using a numerical phantom simulation, and improves susceptibility-based tractography in the brain, kidney, and heart. This work represents the first successful application of susceptibility-based tractography to a whole, intact heart.
The knowledge and tools developed throughout the course of this research were then applied to studying mouse models of Alzheimer’s disease in vivo, and studying hypertrophic human myocardium specimens ex vivo. Though a preliminary study using contrast-enhanced quantitative susceptibility mapping has revealed diamagnetic amyloid plaques associated with Alzheimer’s disease in the mouse brain ex vivo, non-contrast susceptibility imaging was unable to precisely identify these plaques in vivo. Susceptibility tensor imaging of human myocardium specimens at 9.4 T shows that susceptibility anisotropy is larger and mean susceptibility is more diamagnetic in hypertrophic tissue than in normal tissue. These findings support the hypothesis that myofilament proteins are a source of susceptibility contrast and anisotropy in myocardium. This collection of preclinical studies provides new tools and context for analyzing tissue structure, chemistry, and health in a variety of organs throughout the body.
Abstract:
It is apparent that most of the techniques that make use of ionising radiation in human medical practices are now being applied in veterinary medicine. Steps are being taken by the IAEA to provide guidance for humans involved in such practices, but there appears to be no international initiative that considers the protection or welfare of the animal as a patient. There is therefore a risk that the deliberate exposure of an animal, particularly in the therapeutic application of radiation, could do more harm than good. In the light of recent developments in dosimetric modelling and the application of known effects of radiation on different types of animals, for the purposes of the protection of biota in an environmental context, it is argued that it would be sensible now to start a serious consideration of this issue. Some suggestions are made with regard to a number of areas that could be considered further, both specifically and with regard to the field of radiological protection as a whole.