121 results for limits of visual detection
Abstract:
This paper presents visual detection and classification of light vehicles and personnel on a mine site. We capitalise on the rapid advances of ConvNet-based object recognition but highlight that a naive black-box approach results in a significant number of false positives. In particular, the lack of domain-specific training data and the unique landscape of a mine site cause a high rate of errors. We exploit the abundance of background-only images to train a k-means classifier to complement the ConvNet. Furthermore, localisation of objects of interest and a reduction in computation are enabled through region proposals. Our system is tested on over 10 km of real mine site data and we were able to detect both light vehicles and personnel. We show that the introduction of our background model can reduce the false positive rate by an order of magnitude.
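The abstract describes the background model only at a high level. The following is a minimal sketch of the idea of vetoing detections that resemble known background: the 16-dimensional features, cluster count, and distance threshold are all assumptions for illustration, not the paper's actual settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical feature vectors extracted from background-only image patches
background_feats = rng.normal(0.0, 1.0, size=(500, 16))
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(background_feats)

# Threshold: e.g. the 95th percentile of background-to-nearest-cluster distances
bg_dist = km.transform(background_feats).min(axis=1)
threshold = np.quantile(bg_dist, 0.95)

def is_likely_background(feat):
    """Veto a candidate detection whose feature vector lies close to
    any learned background cluster centre."""
    return km.transform(np.asarray(feat).reshape(1, -1)).min() <= threshold

# A feature far from every background cluster survives the filter
novel = np.full(16, 8.0)
print(is_likely_background(novel))  # False: nothing like the background
```

Detections flagged by the ConvNet would only be kept when `is_likely_background` returns False, which is how the complementary background model suppresses false positives.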
Abstract:
Spoken term detection (STD) is the task of looking up a spoken term in a large volume of speech segments. In order to provide fast search, speech segments are first indexed into an intermediate representation using speech recognition engines, which provide multiple hypotheses for each speech segment. Approximate matching techniques are usually applied at the search stage to compensate for the poor performance of automatic speech recognition engines during indexing. Recently, using visual information in addition to audio information has been shown to improve phone recognition performance, particularly in noisy environments. In this paper, we make use of visual information, in the form of the speaker's lip movements, in the indexing stage and investigate its effect on STD performance. In particular, we investigate whether gains in phone recognition accuracy carry through the approximate matching stage to provide similar gains in the final audio-visual STD system over a traditional audio-only approach. We also investigate the effect of using visual information on STD performance in different noise environments.
Abstract:
A simple and rapid method of analysis for mercury ions (Hg2+) and cysteine (Cys) was developed using graphene quantum dots (GQDs) as a fluorescent probe. In the presence of GQDs, Hg2+ cations are adsorbed onto their negatively charged surface by means of electrostatic interactions. Thus, the fluorescence (FL) of the GQDs is significantly quenched as a result of charge transfer, e.g. 92% quenching at 450 nm occurs for a 5 μmol L−1 Hg2+ solution. However, when Cys was added, a significant FL enhancement was observed (510% at 450 nm for an 8.0 μmol L−1 Cys solution), and Hg2+ combined with Cys rather than with the GQDs in aqueous solution. This occurred because a strong metal-thiol bond formed, displacing the weak electrostatic interactions, which resulted in an FL enhancement of the GQDs. The limits of detection (LOD) for Hg2+ and Cys were 0.439 nmol L−1 and 4.5 nmol L−1, respectively. The method was also used successfully to analyze Hg2+ and Cys in spiked water samples.
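Limits of detection of this kind are conventionally computed as three times the blank noise divided by the calibration slope. A small worked example with entirely illustrative numbers (the calibration values below are not the paper's data) shows the arithmetic:

```python
import numpy as np

# Illustrative calibration: Hg2+ concentration (nmol/L) vs. quenched FL signal
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
signal = np.array([100.0, 92.1, 84.3, 68.2, 36.5])  # made-up readings

slope, intercept = np.polyfit(conc, signal, 1)  # quenching calibration line
sigma_blank = 0.12  # assumed std. dev. of repeated blank measurements

# Common 3-sigma definition of the limit of detection
lod = 3 * sigma_blank / abs(slope)
print(f"LOD = {lod:.3f} nmol/L")  # ~0.045 for these illustrative numbers
```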
Abstract:
Environmental data usually include measurements, such as water quality data, which fall below detection limits because of limitations of the instruments or of the analytical methods used. The fact that some responses are not detected needs to be properly taken into account in the statistical analysis of such data. However, it is well known that analyzing a data set with detection limits is challenging, and we often have to rely on traditional parametric methods or simple imputation methods. Distributional assumptions can lead to biased inference, and justification of distributions is often not possible when the data are correlated and there is a large proportion of data below detection limits. The extent of the bias is usually unknown. To draw valid conclusions and hence provide useful advice for environmental management authorities, it is essential to develop and apply an appropriate statistical methodology. This paper proposes rank-based procedures for analyzing non-normally distributed data collected at different sites over a period of time in the presence of multiple detection limits. To take account of temporal correlations within each site, we propose an optimal linear combination of estimating functions and apply the induced smoothing method to reduce the computational burden. Finally, we apply the proposed method to water quality data collected in the Susquehanna River Basin in the United States of America, which clearly demonstrates the advantages of the rank regression models.
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project that is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
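A toy simulation, under assumed lognormal data and an arbitrary detection limit, illustrates why quantile summaries tolerate below-detection-limit observations while mean-based summaries do not: substituting the detection limit for non-detects shifts the mean, but leaves any quantile above the censoring proportion exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
true_vals = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # assumed "truth"
dl = 0.5                                            # assumed detection limit
censored = np.where(true_vals < dl, dl, true_vals)  # report DL for non-detects

p_cens = np.mean(true_vals < dl)   # proportion below the limit (roughly 25%)

# Substitution biases the mean upward...
mean_bias = censored.mean() - true_vals.mean()

# ...but a quantile above the censoring proportion is untouched
q80_true = np.quantile(true_vals, 0.80)
q80_cens = np.quantile(censored, 0.80)
print(mean_bias, q80_true == q80_cens)
```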
Abstract:
Purpose Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Methods Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Results Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Conclusions Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding as many Indigenous children with CI and reduced visual information processing may be missed. 
Emphasis should be placed on identifying children with CI and reduced visual information processing, given the potential effect of these conditions on school performance.
Abstract:
There is increasing interest in the use of Unmanned Aerial Vehicles for load transportation, in applications ranging from environmental remote sensing to construction and parcel delivery. One of the main challenges is accurate control of the load position and trajectory. This paper presents an assessment of real flight trials for the control of an autonomous multi-rotor with a suspended slung load, using only visual feedback to determine the load position. The method uses an onboard camera and a common visual marker detection algorithm to robustly detect the load location. The load position is calculated using an onboard processor and transmitted over a wireless network to a ground station, which integrates MATLAB/SIMULINK and the Robot Operating System (ROS) with a Model Predictive Controller (MPC) to control both the load and the UAV. To evaluate the system performance, the position of the load determined by the visual detection system in real flight is compared with data received from a motion tracking system. The multi-rotor position tracking performance is also analyzed by conducting flight trials using perfect load position data and data obtained only from the visual system. Results show very accurate estimation of the load position (~5% offset) using only the visual system and demonstrate that an external motion tracking system is not needed for this task.
Abstract:
Improving the performance of an incident detection system is essential to minimizing the effects of incidents. This paper proposes a new incident detection method based on an in-car terminal consisting of a GPS module, a GSM module and a control module, as well as optional parts such as airbag sensors and a mobile phone positioning system (MPPS) module. When a driver or vehicle occupant discovers a freeway incident and initiates an alarm report, the incident location determined by GPS, MPPS or both is automatically sent to a transport management center (TMC); the TMC then confirms the accident via closed-circuit television (CCTV) or other approaches. In this method, detection rate (DR), time to detect (TTD) and false alarm rate (FAR) are the key performance targets. Finally, feasible measures such as management modes, education modes and suitable accident confirmation approaches are put forward to improve these targets.
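The three performance targets named above are simple ratios over an alarm log; a sketch with made-up numbers (none of these figures come from the paper):

```python
# Toy computation of the three performance targets: detection rate (DR),
# false alarm rate (FAR), and mean time to detect (TTD).
incidents = 40             # freeway incidents that actually occurred
detected = 36              # incidents for which an alarm reached the TMC
alarms = 50                # total alarms received (including false ones)
delays = [45, 60, 30, 90]  # seconds from incident to alarm, sampled reports

dr = detected / incidents           # detection rate: 0.90
far = (alarms - detected) / alarms  # false alarm rate: 0.28
ttd = sum(delays) / len(delays)     # mean time to detect: 56.25 s
print(dr, far, ttd)
```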
Abstract:
Structural health monitoring (SHM) is the term applied to the procedure of monitoring a structure’s performance, assessing its condition and carrying out appropriate retrofitting so that it performs reliably, safely and efficiently. Bridges form an important part of a nation’s infrastructure. They deteriorate due to age and changing load patterns, and hence early detection of damage helps prolong their lives and prevent catastrophic failures. Monitoring of bridges has traditionally been done by means of visual inspection. With recent developments in sensor technology and the availability of advanced computing resources, newer techniques have emerged for SHM. Acoustic emission (AE) is one such technology that is attracting the attention of engineers and researchers all around the world. This paper discusses the use of AE technology in health monitoring of bridge structures, with a special focus on the analysis of recorded data. AE waves are stress waves generated by mechanical deformation of a material and can be recorded by means of sensors attached to the surface of the structure. Analysis of the AE signals provides vital information regarding the nature of the source of emission. Signal processing of the AE waveform data can be carried out in several ways and is predominantly based on the time and frequency domains. The short-time Fourier transform and wavelet analysis have proved to be superior alternatives to traditional frequency-based analysis in extracting information from recorded waveforms. Some preliminary results of the application of these analysis tools in signal processing of recorded AE data are presented in this paper.
Abstract:
A novel voltammetric method for the simultaneous determination of the glucocorticoid residues prednisone, prednisolone, and dexamethasone was developed. All three compounds were reduced at a mercury electrode in a Britton-Robinson buffer (pH 3.78), and well-defined voltammetric waves were observed. However, the voltammograms of the three compounds overlapped seriously and showed nonlinear character, and it was thus difficult to analyze the compounds individually in their mixtures. In this work, two chemometrics methods, principal component regression (PCR) and partial least squares (PLS), were applied to resolve the overlapped voltammograms, and calibration models were established for the simultaneous determination of these compounds. Under the optimum experimental conditions, the limits of detection (LOD) were 5.6, 8.3, and 16.8 µg L−1 for prednisone, prednisolone, and dexamethasone, respectively. The proposed method was also applied to the determination of these glucocorticoid residues in rabbit plasma and human urine samples with satisfactory results.
Abstract:
Bridges are an important part of society's infrastructure, and reliable methods are necessary to monitor them and ensure their safety and efficiency. Bridges deteriorate with age, and early detection of damage helps prolong their lives and prevent catastrophic failures. Most bridges still in use today were built decades ago and are now subjected to changes in load patterns, which can cause localized distress and, if not corrected, can result in bridge failure. In the past, monitoring of structures was usually done by means of visual inspection and tapping of the structures with a small hammer. Recent advances in sensor and information technologies have resulted in new ways of monitoring the performance of structures. This paper briefly describes the current technologies used in bridge structure condition monitoring, with its prime focus on the application of acoustic emission (AE) technology to the monitoring of bridge structures and its challenges.
Abstract:
Purpose: There have been few studies of visual temporal processing of myopic eyes. This study investigated the visual performance of emmetropic and myopic eyes using a backward visual masking location task. Methods: Data were collected for 39 subjects (15 emmetropes, 12 stable myopes, 12 progressing myopes). In backward visual masking, a target’s visibility is reduced by a mask presented in quick succession ‘after’ the target. The target and mask stimuli were presented at different interstimulus intervals (from 12 to 300 ms). The task involved locating the position of a target letter with both a higher (seven per cent) and a lower (five per cent) contrast. Results: Emmetropic subjects had significantly better performance for the lower contrast location task than the myopes (F2,36 = 22.88; p < 0.001) but there was no difference between the progressing and stable myopic groups (p = 0.911). There were no differences between the groups for the higher contrast location task (F2,36 = 0.72, p = 0.495). No relationship between task performance and either the magnitude of myopia or axial length was found for either task. Conclusions: A location task deficit was observed in myopes only for lower contrast stimuli. Both emmetropic and myopic groups had better performance for the higher contrast task compared to the lower contrast task, with myopes showing considerable improvement. This suggests that five per cent contrast may be the contrast threshold required to bias the task towards the magnocellular system (where myopes have a temporal processing deficit). Alternatively, the task may be sensitive to the contrast sensitivity of the observer.
Abstract:
This study is the first to investigate the effect of prolonged reading on reading performance and visual functions in students with low vision. The study focuses on one of the most common modes of achieving adequate magnification for reading by students with low vision, their close reading distance (proximal or relative distance magnification). Close reading distances impose high demands on near visual functions, such as accommodation and convergence. Previous research on accommodation in children with low vision shows that their accommodative responses are reduced compared to normal vision. In addition, there is an increased lag of accommodation for higher stimulus levels, as may occur at close reading distances. Reduced accommodative responses in low vision and a higher lag of accommodation at close reading distances together could impact on the reading performance of students with low vision, especially during prolonged reading tasks. The presence of convergence anomalies could further affect reading performance. Therefore, the aims of the present study were: 1) to investigate the effect of prolonged reading on reading performance in students with low vision; and 2) to investigate the effect of prolonged reading on visual functions in students with low vision. This study was conducted as cross-sectional research on 42 students with low vision and a comparison group of 20 students with normal vision, aged 7 to 20 years. The students with low vision had vision impairments arising from a range of causes and represented a typical group of students with low vision, with no significant developmental delays, attending school in Brisbane, Australia. All participants underwent a battery of clinical tests before and after a prolonged reading task.
An initial reading-specific history and pre-task measurements that included Bailey-Lovie distance and near visual acuities, Pelli-Robson contrast sensitivity, ocular deviations, sensory fusion, ocular motility, near point of accommodation (pull-away method), accuracy of accommodation (Monocular Estimation Method (MEM) retinoscopy) and Near Point of Convergence (NPC) (push-up method) were recorded for all participants. Reading performance measures were Maximum Oral Reading Rates (MORR), Near Text Visual Acuity (NTVA) and acuity reserves using Bailey-Lovie text charts. Symptoms of visual fatigue were assessed using the Convergence Insufficiency Symptom Survey (CISS) for all participants. Pre-task measurements of reading performance, accuracy of accommodation and NPC were compared with post-task measurements to test for any effects of prolonged reading. The prolonged reading task involved reading a storybook silently for at least 30 minutes. The task was controlled for print size, contrast, difficulty level and content of the reading material. Silent Reading Rate (SRR) was recorded every 2 minutes during prolonged reading. Symptom scores and visual fatigue scores were also obtained for all participants. A visual fatigue analogue scale (VAS) was used to assess visual fatigue during the task, once at the beginning, once in the middle and once at the end of the task. In addition to the subjective assessments of visual fatigue, tonic accommodation was monitored using a photorefractor (PlusoptiX CR03™) every 6 minutes during the task, as an objective assessment of visual fatigue. Reading measures were taken at the habitual reading distance of students with low vision and at 25 cm for students with normal vision. The initial history showed that the students with low vision read for significantly shorter periods at home compared to the students with normal vision.
The working distances of participants with low vision ranged from 3 to 25 cm, and half of them were not using any optical devices for magnification. Nearly half of the participants with low vision were able to resolve 8-point print (1M) at 25 cm. Half of the participants in the low vision group had ocular deviations and suppression at near. Reading rates were significantly reduced in students with low vision compared to those of students with normal vision. In addition, there was a significantly larger number of participants in the low vision group who could not sustain the 30-minute task compared to the normal vision group. However, there were no significant changes in reading rates during or following prolonged reading in either the low vision or normal vision groups. Individual changes in reading rates were independent of baseline reading rates, indicating that the changes in reading rates during prolonged reading cannot be predicted from a typical clinical assessment of reading using brief reading tasks. Contrary to previous reports, the silent reading rates of the students with low vision were significantly lower than their oral reading rates, although oral and silent reading were assessed using different methods. Although visual acuity, contrast sensitivity, near point of convergence and accuracy of accommodation were significantly poorer for the low vision group compared to those of the normal vision group, there were no significant changes in any of these visual functions following prolonged reading in either group. Interestingly, a few students with low vision (n = 10) were found to be reading at a distance closer than their near point of accommodation. This suggests a decreased sensitivity to blur.
Further evaluation revealed that the equivalent intrinsic refractive errors (an estimate of the spherical dioptric defocus which would be expected to yield a patient’s visual acuity in normal subjects) were significantly larger for the low vision group compared to those of the normal vision group. As expected, accommodative responses were significantly reduced for the low vision group compared to the expected norms, which is consistent with their close reading distances, reduced visual acuity and contrast sensitivity. For those in the low vision group who had an accommodative error exceeding their equivalent intrinsic refractive errors, a significant decrease in MORR was found following prolonged reading. The silent reading rates, however, were not significantly affected by accommodative errors in the present study. Suppression also had a significant impact on the changes in reading rates during prolonged reading. The participants who did not have suppression at near showed significant decreases in silent reading rates during and following prolonged reading. This impact of binocular vision at near on prolonged reading was possibly due to the high demands on convergence. The significant predictors of MORR in the low vision group were age, NTVA, reading interest and reading comprehension, accounting for 61.7% of the variance in MORR. SRR was not significantly influenced by any factors, except for the duration of the reading task sustained; participants with higher reading rates were able to sustain a longer reading duration. In students with normal vision, age was the only predictor of MORR. Participants with low vision also reported significantly greater visual fatigue compared to the normal vision group. Measures of tonic accommodation, however, were little influenced by visual fatigue in the present study. Visual fatigue analogue scores were found to be significantly associated with reading rates in students with low vision and normal vision.
However, the patterns of association between visual fatigue and reading rates were different for SRR and MORR. The participants with low vision with higher symptom scores had lower SRRs and participants with higher visual fatigue had lower MORRs. As hypothesized, visual functions such as accuracy of accommodation and convergence did have an impact on prolonged reading in students with low vision, for students whose accommodative errors were greater than their equivalent intrinsic refractive errors, and for those who did not suppress one eye. Those students with low vision who have accommodative errors higher than their equivalent intrinsic refractive errors might significantly benefit from reading glasses. Similarly, considering prisms or occlusion for those without suppression might reduce the convergence demands in these students while using their close reading distances. The impact of these prescriptions on reading rates, reading interest and visual fatigue is an area of promising future research. Most importantly, it is evident from the present study that a combination of factors such as accommodative errors, near point of convergence and suppression should be considered when prescribing reading devices for students with low vision. Considering these factors would also assist rehabilitation specialists in identifying those students who are likely to experience difficulty in prolonged reading, which is otherwise not reflected during typical clinical reading assessments.
Abstract:
Creating acceptance of Visual Effects (VFX) as an effective non-fiction communication tool has the potential to significantly boost return on investment for filmmakers producing documentary. Obtaining this acceptance does not necessarily mean rethinking the way documentary is defined; however, there is a need to address negative perceptions presently dominant within the production industry, specifically the misguided judgement that the use of sequences which include visual effects discredits a filmmaker's attempt to represent reality. After completing a documentary using a traditional model of production as its methodology, the question of how to increase the film's marketability is examined by testing the specific assertion that Visual Effects can increase the level of appeal inherent in the documentary genre. While this area of research is speculative, qualifying Visual Effects as an acceptable communication tool in non-fiction narratives will allow the documentary sector to benefit from increased production capabilities.
Abstract:
High-rate flooding attacks (also known as Distributed Denial of Service, or DDoS, attacks) continue to constitute a pernicious threat within the Internet domain. In this work we demonstrate how packet source IP addresses, coupled with a change-point analysis of the rate of arrival of new IP addresses, may be sufficient to detect the onset of a high-rate flooding attack. Importantly, minimizing the number of features to be examined directly addresses the issue of scalability of the detection process to higher network speeds. Using a proof-of-concept implementation, we show how pre-onset IP addresses can be efficiently represented using a bit vector and used to modify a “white list” filter in a firewall as part of the mitigation strategy.
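As a sketch of the detection idea, first-seen source addresses can be tracked in a bit vector and the arrival rate of new addresses compared against a threshold. The toy 16-bit address space, traffic patterns, and threshold below are assumptions for illustration, not the paper's parameters (a real IPv4 vector would need 2**32 bits).

```python
BITS = 1 << 16                 # toy 16-bit address space for the sketch
seen = bytearray(BITS // 8)    # one bit per possible source address

def observe(addr):
    """Mark an address as seen; return True if it was new."""
    byte, bit = divmod(addr % BITS, 8)
    new = ((seen[byte] >> bit) & 1) == 0
    seen[byte] |= 1 << bit
    return new

def new_ip_rate(window):
    """Fraction of packets in a window arriving from first-seen addresses."""
    return sum(observe(a) for a in window) / len(window)

# Baseline traffic reuses a small pool of addresses; a flood draws
# (often spoofed) addresses from across the whole space.
r0 = new_ip_rate([i % 100 for i in range(1000)])   # low: pool exhausted fast
r1 = new_ip_rate(range(5000, 15000))               # high: almost all new

THRESHOLD = 0.5   # assumed change-point threshold for the sketch
print(r0 < THRESHOLD < r1)  # the jump in new-address rate flags onset
```

The same bit vector doubles as the mitigation structure: addresses observed before the detected change point form the "white list" that the firewall filter admits.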