861 results for Diagnostic imaging Digital techniques
Abstract:
The paper describes a procedure for accurately and speedily calibrating tanks used for the chemical processing of nuclear materials. The procedure features the use of (1) precalibrated vessels certified to deliver known volumes of liquid, (2) calibrated linear measuring devices, and (3) a digital computer for manipulating data and producing printed calibration information. Calibration records of the standards are traceable to primary standards. Logic is incorporated in the computer program to accomplish curve fitting and perform the tests to accept or to reject the calibration, based on statistical, empirical, and report requirements. This logic is believed to be unique.
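The accept-or-reject logic the abstract describes can be sketched as a least-squares fit of delivered volume against liquid level, with the calibration rejected when any residual exceeds a tolerance. The linear model, the function name, and the 0.5 L tolerance below are illustrative assumptions, not the paper's actual statistical criteria:

```python
import numpy as np

def calibrate_tank(heights_mm, volumes_l, tolerance_l=0.5):
    """Fit a linear height-to-volume calibration curve and accept or
    reject it from the worst-case residual (a simplified stand-in for
    the statistical and empirical tests described in the abstract)."""
    coeffs = np.polyfit(heights_mm, volumes_l, deg=1)   # slope, intercept
    predicted = np.polyval(coeffs, heights_mm)
    max_residual = float(np.max(np.abs(predicted - volumes_l)))
    accepted = max_residual <= tolerance_l
    return coeffs, max_residual, accepted

# Simulated data: known volumes delivered from precalibrated vessels
heights = np.array([100.0, 200.0, 300.0, 400.0])   # liquid level, mm
volumes = np.array([50.2, 100.1, 149.8, 200.0])    # delivered volume, L
coeffs, resid, ok = calibrate_tank(heights, volumes)
```

In practice the program described would fit higher-order segments and apply report-format checks as well; this sketch only shows the fit-then-test shape of the logic.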
Abstract:
The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse, world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with the ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular the ways in which advanced research methods can lead into language technologies and vice versa, and the ways in which teaching skills can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open-access publication and open-source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse) and because of the ability to reach non-standard audiences: those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation.
Several chapters focus not on the literary and philological side of classics, but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material is engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains, both within and outside academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions all scholars need to be aware of and self-critical about.
Abstract:
This communication reports a laboratory and plant comparison between the University of Cape Town (UCT) device (capillary) and the McGill University bubble sizing method (imaging). The laboratory work was conducted on single bubbles to establish the accuracy of the techniques by comparing with a reference method (capture in a burette). Single bubble measurements with the McGill University technique showed a tendency to slightly underestimate (4% for a 1.3 mm bubble) and the UCT technique to slightly overestimate (1% for the 1.3 mm bubble). Both trends are anticipated from fundamental considerations. In the UCT technique, bubble breakup was observed when measuring a 2.7 mm bubble using a 0.5 mm ID capillary tube. A discrepancy of 11% was determined when comparing the techniques in an industrial-scale mechanical flotation cell. The possible sources of bias are discussed. (C) 2003 Elsevier Ltd. All rights reserved.
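An imaging method of the kind compared here typically reports the equivalent-circle diameter computed from a bubble's projected area, which is one route by which a small systematic bias can arise. The sketch below illustrates the geometry only; the 8% area shortfall is an invented figure chosen to show how an area error halves when expressed as a diameter error, not the paper's measured bias:

```python
import math

def equivalent_diameter_mm(projected_area_mm2):
    """Equivalent-circle diameter from a bubble's projected area,
    as an imaging technique would report it."""
    return 2.0 * math.sqrt(projected_area_mm2 / math.pi)

# A 1.3 mm spherical bubble seen in projection
true_d = 1.3
area = math.pi * (true_d / 2.0) ** 2
d_exact = equivalent_diameter_mm(area)          # recovers 1.3 mm exactly

# A hypothetical 8% underestimate of the projected area translates to
# only ~4% in diameter, because diameter scales as the square root of area.
d_biased = equivalent_diameter_mm(0.92 * area)
```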
Abstract:
An approach reported recently by Alexandrov et al (2005 Int. J. Imag. Syst. Technol. 14 253-8) on optical scatter imaging, termed digital Fourier microscopy (DFM), represents an adaptation of digital Fourier holography to selective imaging of biological matter. The holographic recording of the sample's optical scatter enables reconstruction of the sample image. The form factor of the sample constituents provides a basis for discriminating between these constituents, implemented via flexible digital Fourier filtering at the post-processing stage. As in dark-field microscopy, the DFM image contrast appears to improve due to the suppression of optical scatter from extended sample structures. In this paper, we present a theoretical and experimental study of DFM using a biological phantom that contains polymorphic scatterers.
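The "flexible digital Fourier filtering" step can be illustrated with a toy radial band-pass applied in the spatial-frequency domain: transform, keep an annulus of frequencies, and transform back. This is a generic sketch of frequency-domain filtering, not the authors' form-factor filters; the radii and test image are arbitrary assumptions:

```python
import numpy as np

def fourier_bandpass(image, r_min, r_max):
    """Keep only a radial band of spatial frequencies -- a toy version
    of selective Fourier-domain filtering as used in post-processing."""
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    y, x = np.ogrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    r = np.hypot(x, y)                      # radius in frequency pixels
    mask = (r >= r_min) & (r <= r_max)      # annular pass-band
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.zeros((64, 64))
img[32, 32] = 1.0                           # a single point scatterer
filtered = fourier_bandpass(img, 2, 20)     # suppress DC and high frequencies
```

Excluding the lowest frequencies (r < 2 here) is what suppresses slowly varying, extended structures, analogous to the dark-field-like contrast the abstract mentions.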
Abstract:
Large areas of tropical sub- and inter-tidal seagrass beds occur in highly turbid environments and cannot be mapped through the water column. The purpose of this project was to determine if and how airborne and satellite imaging systems could be used to map inter-tidal seagrass properties along the wet-tropics coast in north Queensland, Australia. The work aimed to: (1) identify the minimum level of seagrass foliage cover that could be detected from airborne and satellite imagery; and (2) define the minimum detectable differences in seagrass foliage cover in exposed intertidal seagrass beds. High-resolution spectral-reflectance data (2040 bands, 350-2500 nm) were collected over 40 cm diameter plots from 240 sites on Magnetic Island, Pallarenda Beach and Green Island in North Queensland at spring low tides in April 2006. The seagrass species sampled were: Thalassia hemprichii, Halophila ovalis, Halodule uninervis, Syringodium isoetifolium, Cymodocea serrulata, and Cymodocea rotundata. Digital photos were captured for each plot and used to derive estimates of seagrass species cover, epiphytic growth, micro- and macro-algal cover, and substrate colour. Sediment samples were also collected and analysed to measure the concentration of chlorophyll-a associated with benthic micro-algae. The field reflectance spectra were analysed in combination with their corresponding seagrass species foliage cover levels to establish the minimum foliage projective cover required for each seagrass to be significantly different from bare substrate and substrate with algal cover. This analysis was repeated with reflectance spectra resampled to the bandpass functions of QuickBird, IKONOS, SPOT 5 and Landsat 7 ETM+. Preliminary results indicate that conservative minimum detectable seagrass cover levels across most of the species sampled were between 30% and 35% on dark substrates.
Further analysis of these results will be conducted to determine their separability in satellite images and to assess the effects of epiphytes and algal cover.
Abstract:
This review will discuss the use of manual grading scales, digital photography, and automated image analysis in the quantification of fundus changes caused by age-related macular disease. Digital imaging permits processing of images for enhancement, comparison, and feature quantification, and these techniques have been investigated for automated drusen analysis. The accuracy of automated analysis systems has been enhanced by the incorporation of interactive elements, such that the user is able to adjust the sensitivity of the system, or manually add and remove pixels. These methods capitalize on both computer and human image feature recognition and the advantage of computer-based methodologies for quantification. The histogram-based adaptive local thresholding system is able to extract useful information from the image without being affected by the presence of other structures. More recent developments involve compensation for fundus background reflectance, which has most recently been combined with the Otsu method of global thresholding. This method is reported to provide results comparable with manual stereo viewing. Developments in this area are likely to encourage wider use of automated techniques. This will make the grading of photographs easier and cheaper for clinicians and researchers. © 2007 Elsevier Inc. All rights reserved.
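Otsu's method, mentioned at the end of the review, chooses the global threshold that maximises the between-class variance of the grey-level histogram. A minimal sketch on a synthetic image follows; the synthetic "fundus" with bright drusen-like patches is an invented test case, and no background-reflectance compensation is included:

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Otsu's global threshold: pick the level that maximises the
    between-class variance of the grey-level histogram."""
    hist, edges = np.histogram(gray, bins=nbins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                   # probability of class 0 at each level
    mu = np.cumsum(p * centers)         # cumulative mean
    mu_t = mu[-1]                       # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return float(centers[int(np.argmax(sigma_b))])

# Synthetic image: dark background with a brighter drusen-like patch
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.03, (64, 64))
img[20:30, 20:30] = rng.normal(0.7, 0.03, (10, 10))
img = np.clip(img, 0.0, 1.0)
t = otsu_threshold(img)
mask = img > t                          # candidate drusen pixels
```

The interactive systems the review describes would then let the user adjust such a threshold or edit the resulting mask pixel by pixel.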
Abstract:
The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
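The iso-lux diagram is derived from a two-dimensional illuminance array indexed by horizontal and vertical beam angle. The sketch below builds such an array from an invented Gaussian beam model and bins it into iso-lux bands; the angular ranges, peak illuminance and contour levels are illustrative assumptions, not Lucas test geometry:

```python
import numpy as np

# Hypothetical beam model: illuminance falling off with angular
# distance from the hot spot.
h = np.linspace(-15.0, 15.0, 121)    # horizontal angle, degrees
v = np.linspace(-10.0, 10.0, 81)     # vertical angle, degrees
H, V = np.meshgrid(h, v)
E = 40.0 * np.exp(-((H / 6.0) ** 2 + (V / 3.0) ** 2))   # illuminance, lux

# Assign every grid point to an iso-lux band; a contour plotter would
# draw the boundaries between adjacent bands.
levels = [1.0, 5.0, 10.0, 20.0]      # iso-lux contour levels, lux
bands = np.digitize(E, levels)       # 0..4: band index per grid point
peak = float(E.max())
```

The conventional rig fills `E` one photodetector reading at a time (about twenty minutes); the proposed linear-array system acquires a whole column of it per mechanical step, which is where the hundredfold speed-up comes from.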
Abstract:
Background: Evaluation of anterior chamber depth (ACD) can potentially identify those patients at risk of angle-closure glaucoma. We aimed to: compare van Herick’s limbal chamber depth (LCDvh) grades with LCDorb grades calculated from the Orbscan anterior chamber angle values; determine Smith’s technique ACD and compare to Orbscan ACD; and calculate a constant for Smith’s technique using Orbscan ACD. Methods: Eighty participants free from eye disease underwent LCDvh grading, Smith’s technique ACD, and Orbscan anterior chamber angle and ACD measurement. Results: LCDvh overestimated grades by a mean of 0.25 (coefficient of repeatability [CR] 1.59) compared to LCDorb. Smith’s technique (constant 1.40 and 1.31) overestimated ACD by a mean of 0.33 mm (CR 0.82) and 0.12 mm (CR 0.79) respectively, compared to Orbscan. Using linear regression, we determined a constant of 1.22 for Smith’s slit-length method. Conclusions: Smith’s technique (constant 1.31) provided an ACD that is closer to that found with Orbscan compared to a constant of 1.40 or LCDvh. Our findings also suggest that Smith’s technique would produce values closer to that obtained with Orbscan by using a constant of 1.22.
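Smith's slit-length method estimates ACD by multiplying the measured slit length by a constant, which is why the choice between 1.40, 1.31 and the regression-derived 1.22 shifts every estimate proportionally. A worked example, with a hypothetical 2.5 mm slit reading:

```python
def smith_acd_mm(slit_length_mm, constant=1.22):
    """Estimate anterior chamber depth (mm) from Smith's slit-length
    reading; 1.22 is the constant this study derived by regression
    against Orbscan ACD."""
    return constant * slit_length_mm

slit = 2.5                            # mm, a hypothetical slit-length reading
acd_140 = smith_acd_mm(slit, 1.40)    # original constant
acd_131 = smith_acd_mm(slit, 1.31)    # revised constant
acd_122 = smith_acd_mm(slit, 1.22)    # this study's regression constant
```

For the same reading, the three constants give 3.50 mm, 3.275 mm and 3.05 mm respectively, so the mean overestimates reported above follow directly from the constant chosen.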
Abstract:
The human fundus is a complex structure that can be easily visualized, and the world of ophthalmology is going through a golden era of new and exciting fundus imaging techniques. Recent advances in technology have allowed a significant improvement in the imaging modalities clinicians have available to formulate a diagnostic and treatment plan for the patient, but there is constant ongoing work to improve current technology and create new ideas in order to gather as much information as possible from the human fundus. In this article we shall summarize the imaging techniques available in the standard medical retina clinic (i.e. not limited to the research lab) and delineate the technologies that we believe will have a significant impact on the way clinicians will assess retinal and choroidal pathology in the not too distant future.
Abstract:
The principal theme of this thesis is the identification of additional factors affecting soft contact lens fit, and consequently the improved prediction of that fit. Various models have been put forward in an attempt to predict the parameters that influence soft contact lens fit dynamics; however, the factors that influence variation in soft lens fit are still not fully understood. The investigations in this body of work involved the use of a variety of different imaging techniques both to quantify the anterior ocular topography and to assess lens fit. The use of Anterior-Segment Optical Coherence Tomography (AS-OCT) allowed for a more complete characterisation of the cornea and corneoscleral profile (CSP) than either conventional keratometry or videokeratoscopy alone, and for the collection of normative data relating to the CSP for a substantial sample size. The scleral face was identified as being rotationally asymmetric, the mean corneoscleral junction (CSJ) angle being sharpest nasally and becoming progressively flatter at the temporal, inferior and superior limbal junctions. Additionally, 77% of all CSJ angles were within ±5° of 180°, demonstrating an almost tangential extension of the cornea to form the paralimbal sclera. Use of AS-OCT allowed for a more robust determination of corneal diameter than that of white-to-white (WTW) measurement, which is highly variable and dependent on changes in peripheral corneal transparency. Significant differences in ocular topography were found between different ethnicities and sexes, most notably for corneal diameter and corneal sagittal height variables. Lens tightness was found to be significantly correlated with the difference between horizontal CSJ angles (r = +0.40, P = 0.0086).
Modelling of the CSP data gained allowed for prediction of up to 24% of the variance in contact lens fit; however, stronger associations, and an increase in the modelled prediction of variance in fit, would likely have been found had an objective method of lens fit assessment been available. A subsequent investigation to determine the validity and repeatability of objective contact lens fit assessment using digital video capture showed no significant benefit over subjective evaluation. The technique, however, was employed in the ensuing investigation to show significant changes in lens fit between 8 hours (the longest duration of wear previously examined) and 16 hours, demonstrating that wearing time is an additional factor driving lens fit dynamics. The modelling of data from enhanced videokeratoscopy composite maps alone allowed up to 77% of the variance in soft contact lens fit to be predicted, and up to almost 90% when used in conjunction with OCT. The investigations provided further insight into the ocular topography and the factors affecting soft contact lens fit.
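"Predicting X% of the variance" in such modelling is the R² of a regression of a fit measure on topographic predictors. The sketch below shows the calculation shape on synthetic data; the predictor names, coefficients and noise level are invented for illustration and are not the thesis's variables or results:

```python
import numpy as np

# Synthetic dataset: two hypothetical topographic predictors and a
# lens-fit score generated from them plus noise.
rng = np.random.default_rng(1)
n = 40
csj_angle_diff = rng.normal(0.0, 2.0, n)    # hypothetical predictor, degrees
sag_height = rng.normal(3.7, 0.15, n)       # hypothetical predictor, mm
fit_score = 0.4 * csj_angle_diff + 2.0 * sag_height + rng.normal(0.0, 0.2, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), csj_angle_diff, sag_height])
beta, *_ = np.linalg.lstsq(X, fit_score, rcond=None)
pred = X @ beta

# R^2: the fraction of variance in fit_score the model predicts
ss_res = float(np.sum((fit_score - pred) ** 2))
ss_tot = float(np.sum((fit_score - fit_score.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
```

Adding informative predictors (as the thesis did by combining videokeratoscopy maps with OCT data) can only increase R² on the fitted sample, which is the sense in which the combined model reached almost 90%.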