914 results for Hand-held cameras
Abstract:
This study aimed to determine the accuracy (and usability) of the Retinomax, a hand-held autorefractor, compared with measurements taken by hand-held retinoscopy (HHR) in a sample of normal 1-year-old children. The study was a method comparison set at four Community Child Health Clinics. Infants (n = 2079) of approximately 1 year of age were identified from birth/immunization records and their caregivers were contacted by mail. A total of 327 infants ranging in age from 46 weeks to 81 weeks (mean 61 weeks) participated in the study. The children underwent a full ophthalmic examination. Under cycloplegia, refraction was measured in each eye by streak retinoscopy (HHR) and then re-measured using the Retinomax autorefractor. Sphere, cylinder, axis of cylinder and spherical equivalent measurements were recorded for the HHR and Retinomax instruments, and compared. Across the range of refractive errors measured, there was generally close agreement between the two examination methods, although the Retinomax consistently read around 0.3 D less hyperopic than HHR. Significantly more girls (72 infants, 47.7%) struggled during examination with the Retinomax than boys (52 infants, 29.5%) (P < 0.001). Agreement between the two instruments deteriorated if the patient struggled during the examination (P < 0.001). In general, the Retinomax would appear to be a useful screening instrument in early childhood. However, patient cooperation affects the accuracy of results and is an important consideration in determining whether this screening instrument should be adopted for measuring refractive errors in early infancy.
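The method-comparison analysis described above (a systematic bias between two instruments, plus a measure of how widely their differences spread) can be sketched as a Bland-Altman style computation. The spherical-equivalent readings below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical paired spherical-equivalent readings (dioptres) for the same
# eyes, measured by hand-held retinoscopy (HHR) and the Retinomax.
hhr = np.array([1.50, 2.00, 1.25, 3.00, 0.75, 2.25])
retinomax = np.array([1.20, 1.70, 1.00, 2.60, 0.50, 1.90])

diff = retinomax - hhr
bias = diff.mean()                 # systematic offset between the methods
loa = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement

print(f"bias = {bias:+.2f} D")
print(f"limits of agreement: {bias - loa:+.2f} D to {bias + loa:+.2f} D")
```

A negative bias here would correspond to the autorefractor reading less hyperopic than retinoscopy, as the study reports.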
Abstract:
A unique hand-held gene gun is employed to ballistically deliver biomolecules to key cells in the skin and mucosa in the treatment of major diseases. One such device, called the Contoured Shock Tube (CST), delivers powdered micro-particles to the skin with a narrow and highly controllable velocity distribution and a nominally uniform spatial distribution. In this paper, we apply a numerical approach to gain new insight into the behavior of the CST prototype device. The drag correlations proposed by Henderson (1976), Igra and Takayama (1993) and Kurian and Das (1997) were applied to predict micro-particle transport in a numerically simulated gas flow. Simulated pressure histories agree well with the corresponding static and Pitot pressure measurements, validating the CFD approach. The calculated velocity distributions show good agreement, with the best prediction from the Igra & Takayama correlation (maximum discrepancy of 5%). Key features of the gas dynamics and gas-particle interaction are discussed. Statistical analysis shows that a tight free-jet particle velocity distribution is achieved (570 +/- 14.7 m/s) for polystyrene particles (39 +/- 1 µm), representative of a drug payload.
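The gas-particle interaction can be illustrated with a crude drag integration. This sketch assumes a constant drag coefficient and a uniform gas velocity, whereas the paper applies the Henderson, Igra-Takayama and Kurian-Das correlations (drag varying with Mach and Reynolds number) in a full CFD flow field; apart from the polystyrene particle size, all numbers are invented:

```python
import numpy as np

rho_g = 1.2      # gas density, kg/m^3 (assumed)
u_g = 600.0      # local gas velocity, m/s (assumed)
d = 39e-6        # particle diameter, m (polystyrene, as above)
rho_p = 1050.0   # polystyrene density, kg/m^3
Cd = 0.5         # placeholder constant drag coefficient

m = rho_p * np.pi * d**3 / 6     # particle mass
A = np.pi * d**2 / 4             # frontal area

# Explicit Euler integration of dv/dt = F_drag / m over 20 ms of flight
v, dt = 0.0, 1e-7
for _ in range(200_000):
    rel = u_g - v
    F = 0.5 * rho_g * Cd * A * rel * abs(rel)   # quadratic drag law
    v += F / m * dt
print(f"particle velocity after 20 ms: {v:.0f} m/s")
```

Even this simplistic model shows the particle accelerating to nearly the gas velocity, which is why the accuracy of the drag correlation dominates the predicted velocity distribution.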
Abstract:
Fluorescence-enhanced optical imaging is an emerging non-invasive and non-ionizing modality for breast cancer diagnosis. Various optical imaging systems are currently available, although most of them are limited by bulky instrumentation or by their inability to flexibly image different tissue volumes and shapes. Hand-held optical imaging systems are a recent development prized for improved portability, but they are currently limited to surface mapping. Herein, a novel optical imager, consisting primarily of a hand-held probe and a gain-modulated intensified charge-coupled device (ICCD) detector, is developed for both surface and tomographic breast imaging. The unique features of this hand-held probe based optical imager are its ability to: (i) image large tissue areas (5×10 sq. cm) in a single scan, (ii) reduce overall imaging time using a unique measurement geometry, and (iii) perform tomographic imaging for tumor three-dimensional (3-D) localization. Frequency-domain experimental phantom studies have been performed on slab geometries (650 ml) under different target depths (1-2.5 cm), target volumes (0.45, 0.23 and 0.10 cc), fluorescence absorption contrast ratios (1:0, 1000:1 to 5:1), and numbers of targets (up to 3), using Indocyanine Green (ICG) as the fluorescence contrast agent. An approximate extended Kalman filter based inverse algorithm has been adapted for 3-D tomographic reconstructions. Single fluorescent targets were reconstructed when located: (i) up to 2.5 cm deep (at 1:0 contrast ratio) and 1.5 cm deep (up to 10:1 contrast ratio) for the 0.45 cc target; and (ii) 1.5 cm deep for targets as small as 0.10 cc at 1:0 contrast ratio. In the case of multiple targets, two targets as close as 0.7 cm were tomographically resolved when located 1.5 cm deep.
It was observed that performing multi-projection (here dual) tomographic imaging using a priori target information from surface images improved target depth recovery over single-projection imaging. From a total of 98 experimental phantom studies, the sensitivity and specificity of the imager were estimated as 81-86% and 43-50%, respectively. With 3-D tomographic imaging successfully demonstrated for the first time using a hand-held optical imager, the clinical translation of this technology is promising upon further experimental validation from in-vitro and in-vivo studies.
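The reported sensitivity and specificity follow from a simple confusion-matrix tally over the phantom studies. The counts below are hypothetical, chosen only so they sum to 98 and land in the reported ranges; they are not the paper's data:

```python
# Tally of detection outcomes across phantom studies (illustrative counts)
tp, fn = 60, 11   # target present: detected / missed
tn, fp = 12, 15   # target absent: correctly rejected / false alarm

sensitivity = tp / (tp + fn)   # fraction of real targets detected
specificity = tn / (tn + fp)   # fraction of target-free cases correctly rejected
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```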
Abstract:
Optical imaging is an emerging technology for non-invasive breast cancer diagnostics. In recent years, portable and patient-comfortable hand-held optical imagers have been developed for two-dimensional (2D) tumor detection. However, these imagers are not capable of three-dimensional (3D) tomography because they cannot register the positional information of the hand-held probe onto the imaged tissue. A hand-held optical imager with 3D tomography capabilities has been developed in our Optical Imaging Laboratory, as demonstrated in tissue phantom studies. The overall goal of my dissertation is the translation of our imager to the clinical setting for 3D tomographic imaging in human breast tissues. A systematic experimental approach was designed and executed as follows: (i) fast 2D imaging, (ii) coregistered imaging, and (iii) 3D tomographic imaging studies. (i) Fast 2D imaging was initially demonstrated in tissue phantoms (1% Liposyn solution) and in vitro (minced chicken breast and 1% Liposyn). A 0.45 cm3 fluorescent target at 1:0 contrast ratio was detectable up to 2.5 cm deep. Fast 2D imaging experiments performed in vivo with healthy female subjects also detected a 0.45 cm3 fluorescent target superficially placed ∼2.5 cm under the breast tissue. (ii) Coregistered imaging was automated and validated in phantoms with ∼0.19 cm error in the probe's positional information. Coregistration also improved target depth detection to 3.5 cm using a multi-location imaging approach. Coregistered imaging was further validated in vivo, although the error in the probe's positional information increased to ∼0.9 cm (subject to soft-tissue deformation and movement). (iii) Three-dimensional tomography studies were successfully demonstrated in vitro using 0.45 cm3 fluorescent targets.
The feasibility of 3D tomography was demonstrated for the first time in breast tissues using the hand-held optical imager, wherein a 0.45 cm3 fluorescent target (superficially placed) was recovered along with artifacts. Diffuse optical imaging studies were performed in two breast cancer patients with invasive ductal carcinoma. The images showed greater absorption at the tumor sites (as observed from x-ray mammography, ultrasound, and/or MRI). In summary, my dissertation demonstrated the potential of a hand-held optical imager for 2D breast tumor detection and 3D breast tomography, holding promise for extensive clinical translational efforts.
Abstract:
According to the American Podiatric Medical Association, about 15 percent of patients with diabetes will develop a diabetic foot ulcer. Furthermore, foot ulceration precedes 85 percent of diabetes-related amputations. Foot ulcers are caused by a combination of factors, such as lack of feeling in the foot, poor circulation, foot deformities and the duration of diabetes. To date, wounds are inspected visually to monitor healing, without any objective imaging approach to look beneath the wound's surface. Herein, a non-contact, portable, handheld optical device was developed at the Optical Imaging Laboratory as an objective approach to monitor wound healing in foot ulcers. This near-infrared optical technology is non-radiative, safe and fast in imaging large wounds on patients. The FIU IRB-approved study will involve subjects who have been diagnosed with diabetes by a physician and who have developed foot ulcers. Currently, in-vivo imaging studies are carried out every week on diabetic patients with foot ulcers at two clinical sites in Miami. Near-infrared images of the wound are captured on subjects every week and the data is processed using custom-developed Matlab-based image processing tools. The optical contrast of the wound against its periphery and the wound size are analyzed and compared between the NIR and white-light images during the weekly systematic imaging of wound healing.
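The wound-to-periphery optical contrast mentioned above reduces to region statistics over a segmented image. A minimal sketch follows, with a toy NIR frame and hard-coded masks standing in for the segmentation that image-processing tools would normally produce:

```python
import numpy as np

# Toy NIR frame: a darker "wound" patch inside a brighter "periphery"
frame = np.full((100, 100), 200.0)
frame[40:60, 40:60] = 120.0            # wound region (hypothetical intensities)

wound_mask = np.zeros_like(frame, dtype=bool)
wound_mask[40:60, 40:60] = True        # hard-coded stand-in for segmentation

wound_mean = frame[wound_mask].mean()
peri_mean = frame[~wound_mask].mean()

# Weber-style contrast of the wound against its periphery
contrast = (peri_mean - wound_mean) / peri_mean
print(f"contrast = {contrast:.2f}")
```

Tracking this scalar week over week, alongside wound size, is one simple way to quantify healing.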
Abstract:
In this paper, a novel approach for obtaining 3D models from video sequences captured with hand-held cameras is addressed. We define a pipeline that robustly deals with different types of sequences and acquisition devices. Our system follows a divide-and-conquer approach: after a frame decimation that pre-conditions the input sequence, the video is split into short-length clips. This allows the reconstruction step to be parallelized, which translates into a reduction in the amount of computational resources required. The short length of the clips permits an intensive search for the best solution at each step of reconstruction, which makes the system more robust. The process of feature tracking is embedded within the reconstruction loop for each clip, as opposed to other approaches. A final registration step merges all the processed clips into the same coordinate frame.
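The clip-splitting step of such a divide-and-conquer pipeline can be sketched in a few lines. Clip length and overlap are hypothetical parameters, not the paper's; overlapping frames give the final registration step common content for merging clips into one coordinate frame:

```python
def split_into_clips(frames, clip_len=30, overlap=5):
    """Split a decimated frame sequence into short overlapping clips so each
    clip can be reconstructed independently (and in parallel)."""
    clips, start = [], 0
    step = clip_len - overlap
    while start < len(frames):
        clips.append(frames[start:start + clip_len])
        if start + clip_len >= len(frames):
            break                      # last clip absorbs the tail
        start += step
    return clips

clips = split_into_clips(list(range(100)), clip_len=30, overlap=5)
print(len(clips), "clips; clip 0 ends at", clips[0][-1], "; clip 1 starts at", clips[1][0])
```

Each clip would then be reconstructed on its own worker before the registration step.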
Abstract:
Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer’s preferred camera option. Along with advances in the number of pixels, motion blur removal, face-tracking, and noise reduction algorithms have significant roles in the internal processing of the devices. An undesired effect of severe noise reduction is the loss of texture (i.e. low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system. The development of an accurate objective method to identify the texture preservation or texture reproduction capability of a camera device is therefore important. The ‘Dead Leaves’ target has been used extensively as a method to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r^−3, each having a uniformly distributed gray level, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. This target can be used to model the texture transfer through a camera system when a natural scene is captured. In the first part of our study, we identify various factors that affect the MTF measured using the ‘Dead Leaves’ chart. These include variations in illumination, distance, exposure time and ISO sensitivity, among others. We discuss the main differences between this method and existing resolution measurement techniques and identify its advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High frequency residual noise in the processed image contains the same frequency content as fine texture detail, and is sometimes reported as such, thereby leading to inaccurate results.
A wavelet thresholding based denoising technique is utilized for modeling the noise present in the final captured image. This updated noise model is then used for calculating an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
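A ‘Dead Leaves’ chart as described above is straightforward to synthesize: sample disc radii from the r^−3 power law by inverse-CDF sampling, then paint the discs in order so later ones occlude earlier ones. A minimal sketch (image size, disc count and radius bounds are arbitrary choices, not a calibrated target spec):

```python
import numpy as np

rng = np.random.default_rng(0)

def dead_leaves(size=128, n=1000, r_min=2.0, r_max=40.0):
    """Render a 'Dead Leaves' texture chart: overlapping discs with r^-3
    distributed radii and uniformly distributed gray levels; later discs
    occlude earlier ones, mimicking occlusion in natural scenes."""
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    u = rng.random(n)
    # Inverse-CDF sampling of r from p(r) proportional to r^-3 on [r_min, r_max]
    r = ((1.0 - u) / r_min**2 + u / r_max**2) ** -0.5
    cx, cy = rng.random(n) * size, rng.random(n) * size
    gray = rng.random(n)
    for i in range(n):
        disc = (xx - cx[i]) ** 2 + (yy - cy[i]) ** 2 <= r[i] ** 2
        img[disc] = gray[i]
    return img

chart = dead_leaves()
```

Photographing such a chart and comparing the captured power spectrum with the known spectrum of the synthetic target is the basis of the texture MTF measurement.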
Abstract:
Birth has been observed in a number of marsupial species and, in the studies to date, the newborn have crawled up to or across to the pouch. The method of birth in the quoll, a dasyurid, differs greatly from that observed in other marsupials. Births were recorded at normal speed using hand-held digital video cameras. Birth was heralded by a release of about 1 mL of watery fluid from the urogenital sinus followed by gelatinous material contained in either one or two tubes emanating from the sinus. The newborn, still encased in their placental membranes, were in the gelatinous material within a column. To exit this column, they had to grasp a hair and wriggle about 1 cm across to the pouch. In the pouch the newborn young had to compete for a teat. Although the quolls possessed 8 teats, the number of young in the pouch immediately after birth was 17, 16, 6, 16, 13 and 11 for each of the 6 quolls filmed. While birth has been described previously in another two dasyurids, the observers did not describe birth as reported here for the quoll. Nevertheless the movement of the newborn from the sinus to the pouch is so quick that this could have previously been missed. Filming birth from beneath and from the side allowed for a greater understanding of the birth process. Further studies are required to determine whether this use of a gelatinous material is part of the birth process in all dasyurids.
Abstract:
OBJECTIVES: To test the validity of a simple, rapid, field-adapted, portable hand-held impedancemeter (HHI) for the estimation of lean body mass (LBM) and percentage body fat (%BF) in African women, and to develop specific predictive equations. DESIGN: Cross-sectional observational study. SETTINGS: Dakar, the capital city of Senegal, West Africa. SUBJECTS: A total sample of 146 women volunteered. Their mean age was 31.0 y (s.d. 9.1), weight 60.9 kg (s.d. 13.1) and BMI 22.6 kg/m² (s.d. 4.5). METHODS: Body composition values estimated by HHI were compared to those measured by whole-body densitometry performed by air displacement plethysmography (ADP). The specific density of LBM in black subjects was taken into account for the calculation of %BF from body density. RESULTS: Estimates from HHI showed a large bias (mean difference) of 5.6 kg LBM (P < 10⁻⁴) and -8.8 %BF (P < 10⁻⁴), with errors (s.d. of the bias) of 2.6 kg LBM and 3.7 %BF. In order to correct for the bias, specific predictive equations were developed. With the HHI result as a single predictor, error values were 1.9 kg LBM and 3.7 %BF in the prediction group (n=100), and 2.2 kg LBM and 3.6 %BF in the cross-validation group (n=46). Addition of anthropometrical predictors was not necessary. CONCLUSIONS: The HHI analyser significantly overestimated LBM and underestimated %BF in African women. After correction for the bias, body compartments can easily be estimated in African women by using the HHI result in an appropriate prediction equation with good precision. It remains to be seen whether a combination of arm and leg impedancemetry, taking the lower limbs into account, would further improve the prediction of body composition in Africans.
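Developing a bias-correcting predictive equation of this kind amounts to regressing the reference measurement on the instrument reading. The paired LBM values below are invented to mirror the reported overestimation, not taken from the study:

```python
import numpy as np

# Hypothetical paired LBM (kg): hand-held impedancemeter vs. ADP reference
hhi = np.array([48.0, 52.5, 45.0, 60.0, 55.5, 50.0])
adp = np.array([42.3, 46.8, 39.5, 54.4, 49.7, 44.6])

bias = (hhi - adp).mean()
print(f"bias = {bias:+.1f} kg")   # positive: HHI overestimates LBM

# Specific predictive equation: regress the reference on the HHI reading so a
# raw HHI value can be corrected for the population-specific bias.
slope, intercept = np.polyfit(hhi, adp, 1)
corrected = slope * hhi + intercept
rmse = np.sqrt(((corrected - adp) ** 2).mean())
print(f"LBM ~ {slope:.2f} * HHI {intercept:+.2f}, RMSE = {rmse:.2f} kg")
```

In the study this fit was built on the prediction group and its error then checked on a held-out cross-validation group.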
Abstract:
A single picture provides a largely incomplete representation of the scene one is looking at. Usually it reproduces only a limited spatial portion of the scene, according to the standpoint and the viewing angle, and it contains only instantaneous information. Thus very little can be understood about the geometrical structure of the scene, while the position and orientation of the observer with respect to it also remain hard to guess. When multiple views, taken from different positions in space and time, observe the same scene, a much deeper knowledge is potentially achievable. Understanding inter-view relations enables the construction of a collective representation by fusing the information contained in every single image. Visual reconstruction methods confront the formidable, and still unanswered, challenge of delivering a comprehensive representation of structure, motion and appearance of a scene from visual information. Multi-view visual reconstruction deals with the inference of relations among multiple views and the exploitation of the revealed connections to attain the best possible representation. This thesis investigates novel methods and applications in the field of visual reconstruction from multiple views. Three main threads of research have been pursued: dense geometric reconstruction, camera pose reconstruction, and sparse geometric reconstruction of deformable surfaces. Dense geometric reconstruction aims at delivering the appearance of a scene at every single point. The construction of a large panoramic image from a set of traditional pictures has been extensively studied in the context of image mosaicing techniques. An original algorithm for sequential registration suitable for real-time applications has been conceived. The integration of the algorithm into a visual surveillance system has led to robust and efficient motion detection with Pan-Tilt-Zoom cameras.
Moreover, an evaluation methodology for quantitatively assessing and comparing image mosaicing algorithms has been devised and made available to the community. Camera pose reconstruction deals with the recovery of the camera trajectory across an image sequence. A novel mosaic-based pose reconstruction algorithm has been conceived that exploits image mosaics and traditional pose estimation algorithms to deliver more accurate estimates. An innovative markerless vision-based human-machine interface has also been proposed, allowing a user to interact with a gaming application by moving a hand-held consumer-grade camera in unstructured environments. Finally, sparse geometric reconstruction refers to the computation of the coarse geometry of an object at a few preset points. In this thesis, an innovative shape reconstruction algorithm for deformable objects has been designed. A cooperation with the Solar Impulse project allowed the algorithm to be deployed in a very challenging real-world scenario, i.e. the accurate measurement of airplane wing deformations.
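The sequential registration underlying such mosaicing can be sketched as the composition of pairwise homographies into a common reference frame. The transforms below are made-up pure translations; in practice each one would be estimated by feature matching between consecutive frames:

```python
import numpy as np

def compose(pairwise):
    """Chain pairwise homographies H_i (frame i -> frame i-1) into
    frame-to-reference transforms for a mosaic."""
    to_ref = [np.eye(3)]
    for H in pairwise:
        to_ref.append(to_ref[-1] @ H)
    return to_ref

def shift(dx, dy):
    """Homography for a pure translation (illustrative stand-in)."""
    H = np.eye(3)
    H[0, 2], H[1, 2] = dx, dy
    return H

mosaic = compose([shift(5, 0), shift(5, 0), shift(0, 3)])
p = mosaic[3] @ np.array([0.0, 0.0, 1.0])   # frame-3 origin in the reference frame
print(p[:2] / p[2])
```

Accumulated drift in such a chain is exactly what mosaic-based pose reconstruction aims to reduce.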
Abstract:
The ability to view and interact with 3D models has existed for a long time. However, vision-based 3D modeling has seen only limited success in applications, as it faces many technical challenges. Hand-held mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras and fast processors, are especially attractive for advancing the state of the art in virtual reality systems. Also, their ubiquity and fast Internet connections open a path to distributed and collaborative development. However, this path has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is the ability to render these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for visualization, navigation and review of large-scale 3D CAD models, developed under the CEDAR (Collaborative Engineering Design and Review) project. It focuses on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to allow users to navigate through large-scale models. IT professionals, architects, civil engineers and oil industry experts were involved in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about the prototypes. The lessons learned are valuable and are presented in this document. Subsequently, a quantitative study of the different navigation modes was conducted to analyze which mode is best suited to a given situation.
Abstract:
Students who are actively involved in the learning process tend to develop deeper knowledge than those in traditional lecture classrooms (Beatty, 2007; Crouch & Mazur, 2001; Hake, 1998; Richardson, 2003). An instructional strategy that promotes active involvement is Peer Instruction. This strategy encourages student engagement by asking students to respond to conceptual multiple-choice questions intermittently throughout the lecture. These questions can be answered using an electronic hand-held device, commonly known as a clicker, that enables students’ responses to be displayed on a screen. When clickers are not available, a show of hands or other means can be used. The literature suggests that the impact on student learning is the same whether the teacher uses clickers or simply asks students to raise their hands or use flashcards when responding to the questions (Lasry, 2007). This critical analysis argues that using clickers to respond to these in-class conceptual multiple-choice questions, as opposed to using a show of hands, leads to deeper conceptual understanding, better performance on tests, and greater overall enjoyment during class.
Abstract:
A miniaturised gas analyser based on a substrate-integrated hollow waveguide (iHWG) coupled to a micro-sized near-infrared spectrophotometer, comprising a linear variable filter and an array of InGaAs detectors, is described and evaluated. This gas sensing system was applied to analyse surrogate samples of natural fuel gas containing methane, ethane, propane and butane, quantified using multivariate regression models based on the partial least squares (PLS) algorithm with Savitzky-Golay first-derivative data preprocessing. External validation of the obtained models reveals root mean square errors of prediction of 0.37, 0.36, 0.67 and 0.37% (v/v) for methane, ethane, propane and butane, respectively. The developed sensing system provides particularly rapid response times upon composition changes of the gaseous sample (approximately 2 s) due to the minute volume of the iHWG-based measurement cell. The sensing system developed in this study is fully portable with a hand-held-sized analyser footprint, and thus ideally suited for field analysis. Last but not least, the obtained results corroborate the potential of NIR-iHWG analysers for monitoring the quality of natural gas and petrochemical gaseous products.