229 results for illumination conditions
in the Queensland University of Technology - ePrints Archive
Abstract:
PURPOSE. This study was conducted to determine the magnitude of pupil center shift between the illumination conditions provided by corneal topography measurement (photopic illuminance) and by Hartmann-Shack aberrometry (mesopic illuminance) and to investigate the importance of this shift when calculating corneal aberrations and for the success of wavefront-guided surgical procedures. METHODS. Sixty-two subjects with emmetropia underwent corneal topography and Hartmann-Shack aberrometry. Corneal limbus and pupil edges were detected, and the differences between their respective centers were determined for both procedures. Corneal aberrations were calculated using the pupil centers for corneal topography and for Hartmann-Shack aberrometry. Bland-Altman plots and paired t-tests were used to analyze the differences between corneal aberrations referenced to the two pupil centers. RESULTS. The mean magnitude (modulus) of the displacement of the pupil with the change of illumination conditions was 0.21 ± 0.11 mm. The effect of this pupillary shift was manifest for coma corneal aberrations for 5-mm pupils, but the two sets of aberrations calculated with the two pupil positions were not significantly different. Sixty-eight percent of the population had differences in coma smaller than 0.05 µm, and only 4% had differences larger than 0.1 µm. Pupil displacement was not large enough to significantly affect other higher-order Zernike modes. CONCLUSIONS. Estimated corneal aberrations changed slightly between the photopic and mesopic illumination conditions given by corneal topography and Hartmann-Shack aberrometry. However, according to published tolerance ranges, this systematic pupil shift is large enough to degrade the optical quality below the theoretically predicted diffraction limit of wavefront-guided corneal surgery.
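The modulus statistic reported above is simply the Euclidean norm of the per-eye pupil-centre difference vector. A minimal sketch of that computation (the data below are illustrative only, not the study's measurements):

```python
import numpy as np

def pupil_shift_modulus(centers_photopic, centers_mesopic):
    """Mean and sample SD of the pupil-centre displacement magnitude
    between two illumination conditions; each argument is an (n, 2)
    array of centre coordinates in mm."""
    d = np.linalg.norm(centers_photopic - centers_mesopic, axis=1)
    return d.mean(), d.std(ddof=1)

# Illustrative data: two eyes whose centres each shift by a (0.3, 0.4) mm
# vector, i.e. a 0.5 mm modulus.
photopic = np.array([[0.0, 0.0], [1.0, 1.0]])
mesopic = np.array([[0.3, 0.4], [1.3, 1.4]])
mean_shift, sd_shift = pupil_shift_modulus(photopic, mesopic)
print(round(mean_shift, 2), round(sd_shift, 2))  # 0.5 0.0
```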
Abstract:
Current older adult capability data-sets fail to account for the effects of everyday environmental conditions on capability. This article details a study that investigates the effects of everyday ambient illumination conditions (overcast, 6000 lx; in-house lighting, 150 lx; and street lighting, 7.5 lx) and contrast (90%, 70%, 50% and 30%) on the near visual acuity (VA) of older adults (n = 38, 65–87 years). VA was measured at a 1-m viewing distance using logarithm of minimum angle of resolution (LogMAR) acuity charts. Results from the study showed that for all contrast levels tested, VA decreased by 0.2 log units between the overcast and street lighting conditions. On average, in overcast conditions, participants could detect detail around 1.6 times smaller on the LogMAR charts compared with street lighting. VA also significantly decreased when contrast was reduced from 70% to 50%, and from 50% to 30%, in each of the ambient illumination conditions. Practitioner summary: This article presents an experimental study that investigates the impact of everyday ambient illumination levels and contrast on older adults' VA. Results show that both factors have a significant effect on their VA. Findings suggest that environmental conditions need to be accounted for in older adult capability data-sets/designs.
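The "around 1.6 times" figure follows directly from the LogMAR scale, on which letter size is proportional to ten raised to the LogMAR value; a quick check of the arithmetic:

```python
# Each LogMAR step is a base-10 logarithmic change in the minimum angle of
# resolution, so a 0.2 log-unit acuity loss corresponds to a size ratio of
# 10 ** 0.2 between the smallest resolvable details.
size_ratio = 10 ** 0.2
print(round(size_ratio, 2))  # 1.58, i.e. the "around 1.6 times" reported
```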
Abstract:
Purpose: Computer vision has been widely used in the inspection of electronic components. This paper proposes a computer vision system for the automatic detection, localisation, and segmentation of solder joints on Printed Circuit Boards (PCBs) under different illumination conditions. Design/methodology/approach: An illumination normalisation approach is applied to an image, which can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image the same as in the corresponding image under normal lighting conditions. Consequently, special lighting and instrumental setup can be reduced in order to detect solder joints. These normalised images are insensitive to illumination variations and are used for the subsequent solder joint detection stages. In the segmentation approach, the PCB image is transformed from the RGB colour space to the YIQ colour space for the effective detection of solder joints from the background. Findings: The segmentation results show that the proposed approach significantly improves performance for images under varying illumination conditions. Research limitations/implications: This paper proposes a front-end system for the automatic detection, localisation, and segmentation of solder joint defects. Further research is required to complete the full system, including the classification of solder joint defects. Practical implications: The methodology presented in this paper can be an effective method to reduce cost and improve quality in the production of PCBs in the manufacturing industry. Originality/value: This research proposes the automatic location, identification and segmentation of solder joints under different illumination conditions.
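The abstract does not give the transform coefficients, but RGB-to-YIQ conversion is conventionally defined by the NTSC matrix; a minimal sketch assuming that standard definition:

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform matrix (a conventional definition,
# not necessarily the exact coefficients used in the paper).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: orange-blue chrominance
    [0.211, -0.523,  0.312],   # Q: purple-green chrominance
])

def rgb_to_yiq(rgb):
    """Convert an (H, W, 3) RGB image with values in [0, 1] to YIQ."""
    return rgb @ RGB_TO_YIQ.T

pixel = np.array([[[1.0, 1.0, 1.0]]])   # pure white, 1x1 image
yiq = rgb_to_yiq(pixel)
print(np.round(yiq[0, 0], 3))  # Y = 1, I = Q = 0 for achromatic input
```

Separating luminance (Y) from chrominance (I, Q) is what makes the representation useful here: illumination changes mostly perturb Y, while the chrominance channels stay comparatively stable for segmenting solder joints from the background.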
Abstract:
Deep Raman spectroscopy has been utilized for the standoff detection of concealed chemical threat agents from a distance of 15 meters under real-life background illumination conditions. By using combined time- and space-resolved measurements, various explosive precursors hidden in opaque plastic containers were identified non-invasively. Our results confirm that combined time- and space-resolved Raman spectroscopy leads to higher selectivity towards the sub-layer over the surface layer, as well as enhanced rejection of fluorescence from the container surface, when compared to standoff spatially offset Raman spectroscopy. Raman spectra with minimal interference from the packaging material and good signal-to-noise ratio were acquired within 5 seconds of measurement time. A new combined time- and space-resolved Raman spectrometer has been designed with nanosecond laser excitation and gated detection, making it of lower cost and complexity than picosecond-based laboratory systems.
In the pursuit of effective affective computing: the relationship between features and registration
Abstract:
For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances to a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This begs the question: Aside from countering against illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment-subject-dependent active appearance models versus subject-independent CLMs-on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
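A minimal sketch of the "histogram of oriented gradients" idea referred to above; this toy `hog_cell` is illustrative only, since a full HOG descriptor adds overlapping cells and block normalisation:

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Gradient-orientation histogram for a single cell of a grayscale
    patch, L2-normalised; a sketch of the HOG descriptor family, not a
    complete implementation."""
    gy, gx = np.gradient(patch.astype(float))            # row, column derivatives
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0         # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                                     # magnitude-weighted vote
    return hist / (np.linalg.norm(hist) + 1e-9)

# A horizontal intensity ramp has purely horizontal gradients, so all the
# energy lands in the 0-degree orientation bin.
patch = np.tile(np.arange(8.0), (8, 1))
print(hog_cell(patch).argmax())  # 0
```

Pooling gradient magnitudes over a cell is exactly what gives such descriptors the tolerance to small alignment errors that the paper identifies: shifting the patch by a pixel or two barely changes the histogram.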
Abstract:
The chief challenge facing persistent robotic navigation using vision sensors is the recognition of previously visited locations under different lighting and illumination conditions. The majority of successful approaches to outdoor robot navigation use active sensors such as LIDAR, but the associated weight and power draw of these systems makes them unsuitable for widespread deployment on mobile robots. In this paper we investigate methods to combine representations for visible and long-wave infrared (LWIR) thermal images with time information to combat the time-of-day-based limitations of each sensing modality. We calculate appearance-based match likelihoods using the state-of-the-art FAB-MAP [1] algorithm to analyse loop closure detection reliability across different times of day. We present preliminary results on a dataset of 10 successive traverses of a combined urban-parkland environment, recorded in 2-hour intervals from before dawn to after dusk. Improved location recognition throughout an entire day is demonstrated using the combined system compared with methods which use visible or thermal sensing alone.
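The abstract does not specify the fusion rule used to combine the visible and LWIR match likelihoods with time information; one simple possibility is a time-of-day weighting, sketched below. Both `combined_match_likelihood` and its cosine weighting are illustrative assumptions, not the paper's method:

```python
import math

def combined_match_likelihood(p_visible, p_thermal, hour):
    """Fuse per-modality place-match likelihoods with a time-of-day
    prior: visible imagery weighted most around midday, LWIR thermal
    imagery most at night. The weighting scheme is an assumption."""
    w_vis = 0.5 * (1.0 + math.cos((hour - 12.0) / 12.0 * math.pi))  # 1 at noon, 0 at midnight
    return w_vis * p_visible + (1.0 - w_vis) * p_thermal

print(round(combined_match_likelihood(0.9, 0.4, 12), 2))  # noon: visible dominates -> 0.9
print(round(combined_match_likelihood(0.9, 0.4, 0), 2))   # midnight: thermal dominates -> 0.4
```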
Abstract:
Person re-identification involves recognising individuals in different locations across a network of cameras, and is a challenging task due to a large number of varying factors such as pose (both subject and camera) and ambient lighting conditions. Existing databases do not adequately capture these variations, making evaluations of proposed techniques difficult. In this paper, we present a new challenging multi-camera surveillance database designed for the task of person re-identification. This database consists of 150 unscripted sequences of subjects travelling in a building environment through up to eight camera views, appearing from various angles and in varying illumination conditions. A flexible XML-based evaluation protocol is provided to allow a highly configurable evaluation setup, enabling a variety of scenarios relating to pose and lighting conditions to be evaluated. A baseline person re-identification system consisting of colour, height and texture models is demonstrated on this database.
Abstract:
The optimisation of the fabrication of a compact TiO2 blocking layer (via spray pyrolysis deposition) for poly(3-hexylthiophene) (P3HT) solid-state dye-sensitized solar cells (SDSCs) is reported. We used a novel spray TiO2 precursor solution composition obtained by adding acetylacetone to a conventional formulation (diisopropoxytitanium bis(acetylacetonate) in ethanol). Scanning electron microscopy shows a TiO2 layer with compact morphology and a thickness of around 100 nm. A Tafel plot analysis reveals an enhancement of the device's diode-like behaviour induced by the acetylacetone blocking layer with respect to the conventional one. Significantly, the device fabricated with the acetylacetone blocking layer shows an overall increase in cell performance with respect to the cell with the conventional one (ΔJsc/Jsc = +13.8%, ΔFF/FF = +39.7%, ΔPCE/PCE = +55.6%). An optimum conversion efficiency is found for 15 successive spray cycles, where the diode-like behaviour of the acetylacetone blocking layer is most effective. Over three batches of cells (fabricated with P3HT and dye D35), an average conversion efficiency of 3.9% (under a class A sun simulator with 1 sun A.M. 1.5 illumination conditions) was measured. A conversion efficiency of 4.5% was extracted from the best cell we fabricated. This represents a significant increase with respect to previously reported values for P3HT/dye D35 based SDSCs.
Abstract:
New push-pull copolymers based on thiophene (donor) and benzothiadiazole (acceptor) units, poly[4,7-bis(3-dodecylthiophene-2-yl)benzothiadiazole-co-thiophene] (PT3B1) and poly[4,7-bis(3-dodecylthiophene-2-yl)benzothiadiazole-co-benzothiadiazole] (PT2B2), are designed and synthesized via Stille and Suzuki coupling routes, respectively. Gel permeation chromatography shows the number average molecular weights are 31100 and 8400 g mol⁻¹ for the two polymers, respectively. Both polymers show absorption throughout a wide range of the UV-vis region, from 300 to 650 nm. A significant red shift of the absorption edge is observed in thin films compared to solutions of the copolymers; the optical band gap is in the range of 1.7 to 1.8 eV. Cyclic voltammetry indicates reversible oxidation and reduction processes, with HOMO energy levels calculated to be in the range of 5.2 to 5.4 eV. Upon testing both materials in organic field-effect transistors (OFETs), PT3B1 showed a hole mobility of 6.1 × 10⁻⁴ cm² V⁻¹ s⁻¹, while PT2B2 did not show any field-effect transport. Both copolymers displayed a photovoltaic response when combined with a methanofullerene as an electron acceptor. The best performance was achieved when the copolymer PT3B1 was blended with [70]PCBM in a 1:4 ratio, exhibiting a short-circuit current of 7.27 mA cm⁻², an open-circuit voltage of 0.85 V, and a fill factor of 41%, yielding a power conversion efficiency of 2.54% under simulated air mass (AM) 1.5 global (1.5 G) illumination conditions (100 mW cm⁻²). Similar devices utilizing PT2B2 in place of PT3B1 demonstrated reduced performance, with a short-circuit current of 4.8 mA cm⁻², an open-circuit voltage of 0.73 V, and a fill factor of 30%, resulting in a power conversion efficiency of roughly 1.06%.
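The reported efficiencies can be cross-checked with the standard relation PCE = Jsc · Voc · FF / Pin; the small discrepancies against the quoted 2.54% and 1.06% reflect rounding in the published device parameters:

```python
def pce(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    """Power conversion efficiency (%) from short-circuit current density
    (mA cm^-2), open-circuit voltage (V), fill factor (fraction) and
    incident power density (mW cm^-2)."""
    return jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2 * 100.0

print(round(pce(7.27, 0.85, 0.41), 2))  # PT3B1:[70]PCBM -> 2.53
print(round(pce(4.80, 0.73, 0.30), 2))  # PT2B2:[70]PCBM -> 1.05
```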
Abstract:
Due to their unobtrusive nature, vision-based approaches to tracking sports players have been preferred over wearable sensors, as they do not require the players to be instrumented for each match. Unfortunately, due to the heavy occlusion between players and variation in resolution and pose, in addition to fluctuating illumination conditions, tracking players continuously is still an unsolved vision problem. For tasks like clustering and retrieval, having noisy data (i.e. missing and false player detections) is problematic as it generates discontinuities in the input data stream. One method of circumventing this issue is to use an occupancy map, where the field is discretised into a series of zones and a count of player detections in each zone is obtained. A series of frames can then be concatenated to represent a set-play or example of team behaviour. A problem with this approach, though, is that the compressibility is low (i.e. the variability in the feature space is incredibly high). In this paper, we propose the use of a bilinear spatiotemporal basis model using a role representation to clean up the noisy detections, which operates in a low-dimensional space. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared it to manually labeled data.
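The occupancy-map representation described above can be sketched as follows; the pitch dimensions and grid resolution are illustrative choices, not values from the paper (91.4 m × 55.0 m is a standard field-hockey pitch):

```python
import numpy as np

def occupancy_map(detections, field_size=(91.4, 55.0), grid=(10, 6)):
    """Count player detections per zone of a discretised pitch.

    detections: iterable of (x, y) positions in metres. The field is
    split into grid[0] x grid[1] equal zones; each detection votes for
    the zone containing it.
    """
    nx, ny = grid
    counts = np.zeros(grid, dtype=int)
    for x, y in detections:
        i = min(int(x / field_size[0] * nx), nx - 1)  # clamp to last zone
        j = min(int(y / field_size[1] * ny), ny - 1)
        counts[i, j] += 1
    return counts

# One frame with three detections: two near one corner, one mid-field.
frame = [(5.0, 5.0), (5.5, 4.8), (60.0, 30.0)]
counts = occupancy_map(frame)
print(counts.sum())  # 3 detections binned into zones
```

Because only zone counts are kept, a missing or false detection perturbs a single bin rather than breaking the continuity of a per-player trajectory, which is exactly why the representation tolerates noisy detectors.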
Abstract:
Intrinsically photosensitive retinal ganglion cells (ipRGCs) in the eye transmit the environmental light level, projecting to the suprachiasmatic nucleus (SCN) (Berson, Dunn & Takao, 2002; Hattar, Liao, Takao, Berson & Yau, 2002), the location of the circadian biological clock, and the olivary pretectal nucleus (OPN) of the pretectum, the start of the pupil reflex pathway (Hattar, Liao, Takao, Berson & Yau, 2002; Dacey, Liao, Peterson, Robinson, Smith, Pokorny, Yau & Gamlin, 2005). The SCN synchronizes the circadian rhythm, a cycle of biological processes coordinated to the solar day, and drives the sleep/wake cycle by controlling the release of melatonin from the pineal gland (Claustrat, Brun & Chazot, 2005). Encoded photic input from ipRGCs to the OPN also contributes to the pupil light reflex (PLR), the constriction and recovery of the pupil in response to light. IpRGCs control the post-illumination component of the PLR, the partial pupil constriction maintained for > 30 sec after a stimulus offset (Gamlin, McDougal, Pokorny, Smith, Yau & Dacey, 2007; Kankipati, Girkin & Gamlin, 2010; Markwell, Feigl & Zele, 2010). It is unknown if intrinsic ipRGC and cone-mediated inputs to ipRGCs show circadian variation in their photon-counting activity under constant illumination. If ipRGCs demonstrate circadian variation of the pupil response under constant illumination in vivo, when in vitro ipRGC activity does not (Weng, Wong & Berson, 2009), this would support central control of the ipRGC circadian activity. A preliminary experiment was conducted to determine the spectral sensitivity of the ipRGC post-illumination pupil response under the experimental conditions, confirming the successful isolation of the ipRGC response (Gamlin, et al., 2007) for the circadian experiment. In this main experiment, we demonstrate that ipRGC photon-counting activity has a circadian rhythm under constant experimental conditions, while direct rod and cone contributions to the PLR do not. 
Intrinsic ipRGC contributions to the post-illumination pupil response decreased 2:46 h prior to melatonin onset for our group model, with the peak ipRGC attenuation occurring 1:25 h after melatonin onset. Our results suggest a centrally controlled evening decrease in ipRGC activity, independent of environmental light, which is temporally synchronized (demonstrates a temporal phase-advanced relationship) to the SCN mediated release of melatonin. In the future the ipRGC post-illumination pupil response could be developed as a fast, non-invasive measure of circadian rhythm. This study establishes a basis for future investigation of cortical feedback mechanisms that modulate ipRGC activity.
Abstract:
This thesis investigated a range of factors underlying the impact of uncorrected refractive errors on laboratory-based tests related to driving. Results showed that refractive blur had a pronounced effect on recognition of briefly presented targets, particularly under low light conditions. Blur, in combination with audio distracters, also slowed a participant's reactions to road hazards in video presentations. This suggests that recognition of suddenly appearing road hazards might be slowed in the presence of refractive blur, particularly under conditions of distraction. These findings highlight the importance of correcting even small refractive errors for driving, particularly at night.
Abstract:
This paper examines the feasibility of using vertical light pipes to naturally illuminate the central core of a multilevel building not reached by window light. The challenges addressed were finding a method to extract and distribute equal amounts of light at each level and designing collectors to improve the effectiveness of vertical light pipes in delivering low elevation sunlight to the interior. Extraction was achieved by inserting partially reflecting cones within transparent sections of the pipes at each floor level. Theory was formulated to estimate the partial reflectance necessary to provide equal light extraction at each level. Designs for daylight collectors formed from laser cut panels tilted above the light pipe were developed and the benefits and limitations of static collectors as opposed to collectors that follow the sun azimuth investigated. Performance was assessed with both basic and detailed mathematical simulation and by observations made with a five level model building under clear sky conditions.
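The equal-extraction condition has a simple closed form: if level i (numbered in the order the light reaches it, out of N levels) receives (N − i + 1)/N of the input flux and must divert 1/N of the total, its cone needs partial reflectance 1/(N − i + 1). A sketch under the assumption of a lossless pipe:

```python
def extraction_reflectances(n_levels):
    """Partial reflectance needed at each level (in the order the light
    arrives) so that every level extracts an equal share of the input
    flux; the final cone is fully reflective."""
    return [1.0 / (n_levels - i) for i in range(n_levels)]

r = extraction_reflectances(5)
print([round(x, 3) for x in r])  # [0.2, 0.25, 0.333, 0.5, 1.0]

# Verify: propagating unit flux down the pipe, each level extracts 1/5.
flux, extracted = 1.0, []
for ri in r:
    extracted.append(flux * ri)
    flux *= 1.0 - ri
print([round(e, 3) for e in extracted])  # [0.2, 0.2, 0.2, 0.2, 0.2]
```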
Abstract:
Evidence has accumulated that rod activation under mesopic and scotopic light levels alters visual perception and performance. Here we review the most recent developments in the measurement of rod and cone contributions to mesopic color perception and temporal processing, with a focus on data measured using the four-primary photostimulator method that independently controls rod and cone excitations. We discuss the findings in the context of rod inputs to the three primary retinogeniculate pathways to understand rod contributions to mesopic vision. Additionally, we present evidence that hue perception is possible under scotopic, pure rod-mediated conditions that involves cortical mechanisms.