974 results for "Visual magnitudes measurements"
Abstract:
Background: Deviated nasal septum (DNS) is one of the major causes of nasal obstruction. The polyvinylidene fluoride (PVDF) nasal sensor is a new technique developed to assess nasal obstruction caused by DNS. This study evaluates PVDF nasal sensor measurements against peak nasal inspiratory flow (PNIF) measurements and a visual analog scale (VAS) of nasal obstruction. Methods: Owing to their piezoelectric property, two PVDF nasal sensors provide output voltage signals corresponding to the right and left nostril when subjected to nasal airflow. The peak-to-peak amplitude of the voltage signal corresponding to nasal airflow was analyzed to assess nasal obstruction. PVDF nasal sensor and PNIF measurements were performed on 30 healthy subjects and 30 DNS patients. Receiver operating characteristic (ROC) analysis was used to assess how well each method detected DNS. Results: Measurements of the PVDF nasal sensor correlated strongly with PNIF findings (r = 0.67; p < 0.01) in DNS patients. A significant difference (p < 0.001) was observed between the DNS and control groups for both PVDF nasal sensor and PNIF measurements. Cutoffs between normal and pathological of 0.51 Vp-p for the PVDF nasal sensor and 120 L/min for PNIF were calculated. No significant difference was found between the PVDF nasal sensor and PNIF in sensitivity (89.7% versus 82.6%) or specificity (80.5% versus 78.8%). Conclusion: The results show that PVDF measurements closely agree with PNIF findings. The developed PVDF nasal sensor is a simple, inexpensive, fast, and portable objective method for assessing nasal obstruction due to DNS in clinical practice.
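The peak-to-peak analysis described above can be reduced to a minimal sketch. The breathing trace, sampling parameters, and the below-cutoff-means-obstruction convention are illustrative assumptions; only the 0.51 Vp-p cutoff comes from the abstract.

```python
import numpy as np

def peak_to_peak(signal):
    """Peak-to-peak amplitude (Vp-p) of a sensor voltage trace."""
    signal = np.asarray(signal, dtype=float)
    return signal.max() - signal.min()

def classify_nostril(vpp, cutoff=0.51):
    """Compare a nostril's Vp-p against the reported 0.51 Vp-p cutoff.
    Assumption (not stated in the abstract): lower amplitude = less
    airflow = pathological."""
    return "pathological" if vpp < cutoff else "normal"

# Hypothetical breathing trace: a 1 Hz sinusoid of 0.4 V amplitude (0.8 Vp-p)
t = np.linspace(0, 5, 1000)
trace = 0.4 * np.sin(2 * np.pi * 1.0 * t)
vpp = peak_to_peak(trace)
```

A real trace would be noisier; in practice the amplitude would be taken per breath cycle rather than over the whole recording.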
Abstract:
Commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure geometric data are not fully automated because of the manual pre- and/or post-processing they require. The amount of human intervention required and, in some cases, the high equipment costs associated with these methods impede their adoption in the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an alternative and inexpensive solution, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and 3D coordinates of the matched feature points are calculated via triangulation. The detected SURF features in two successive video frames are then matched automatically, with the RANSAC algorithm used to discard mismatches. The quaternion motion estimation method is then used, along with bundle adjustment optimization, to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distances of randomly selected feature points with their corresponding tape measurements.
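The triangulation step can be illustrated with a minimal linear (DLT) sketch. The camera intrinsics, baseline, and test point below are hypothetical, and the paper's full pipeline (SURF matching, RANSAC, bundle adjustment) is not reproduced.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature point.
    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coords (u, v).
    The 3D point is the null vector of the stacked constraint matrix."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Hypothetical rectified stereo pair: identical intrinsics, 0.2 m baseline
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.], [0.]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

With noiseless correspondences the DLT solution recovers the point exactly; with real matches a nonlinear refinement (as in bundle adjustment) would follow.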
Abstract:
Air pockets, a common type of concrete surface defect, are often created on formed concrete surfaces during construction. Their existence undermines the desired appearance and visual uniformity of architectural concrete. Measuring the impact of air pockets on the concrete surface is therefore vital in assessing the quality of architectural concrete. Traditionally, such measurements have been based mainly on in-situ manual inspections, the results of which are subjective and heavily dependent on the inspectors' own criteria and experience. Inspectors may reach different assessments even when inspecting the same concrete surface, and the need for experienced inspectors costs owners or general contractors more in inspection fees. To alleviate these problems, this paper presents a methodology that can measure air pockets quantitatively and automatically. To achieve this goal, a high-contrast, scaled image of a concrete surface is acquired from a fixed distance range, and a spot filter is then used to accurately detect air pockets with the help of an image pyramid. The properties of the air pockets (their number, size, and occupied area) are subsequently calculated and used to quantify the impact of air pockets on the architectural concrete surface. The methodology is implemented in a C++-based prototype and tested on a database of concrete surface images. Comparisons with manual tests validated its measuring accuracy. As a result, the methodology presented in this paper can increase the reliability of concrete surface quality assessment.
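A minimal sketch of how the reported properties (count, size, occupied area) could be computed once detection has produced a binary defect mask. The mask, pixel scale, and 4-connectivity choice are illustrative assumptions, not the paper's spot-filter detector.

```python
import numpy as np

def air_pocket_stats(mask, mm_per_px=0.5):
    """Count air pockets (4-connected components of a binary defect mask)
    and report their areas and the total occupied fraction of the surface."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask)
    sizes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                stack, size = [(i, j)], 0        # flood-fill one component
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return {"count": len(sizes),
            "areas_mm2": [s * mm_per_px ** 2 for s in sizes],
            "occupied_fraction": mask.sum() / mask.size}

# Tiny hypothetical mask: one 4-pixel pocket and one isolated pixel
mask = np.zeros((6, 6), dtype=int)
mask[1:3, 1:3] = 1
mask[4, 4] = 1
stats = air_pocket_stats(mask, mm_per_px=0.5)
```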
Abstract:
Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the cost of printing color images or to help color-blind people see the visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to achieve a reasonable transformation. To solve, or at least reduce, these problems, we propose a new algorithm based on a probabilistic graphical model, under the assumption that the image is defined over a Markov random field. The color-to-gray procedure can thus be regarded as a labeling process that preserves newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that a perceiver can extract from a color image; they indicate the state of image properties the perceiver is interested in. Different people may perceive different cues in the same color image, and three cues are defined in this paper: color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model as an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.
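The labeling view of color-to-gray can be caricatured as follows: each pixel's gray level is a label chosen to balance closeness to luminance (unary term) against reproducing the color contrast to its neighbours (pairwise term), optimized by iterated conditional modes. This is a toy MRF sketch, not the paper's model or its three cues; all parameters below are illustrative.

```python
import numpy as np

def color_to_gray_icm(img, beta=0.5, iters=5):
    """Toy MRF color-to-gray by ICM over 32 coarse gray levels.
    Unary: squared distance to luminance. Pairwise: the gray difference
    to each neighbour should match the signed Euclidean color distance."""
    img = np.asarray(img, dtype=float)           # H x W x 3 in [0, 1]
    lum = img @ np.array([0.299, 0.587, 0.114])  # luminance initialization
    levels = np.linspace(0.0, 1.0, 32)
    gray = lum.copy()
    H, W = lum.shape
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                costs = (levels - lum[i, j]) ** 2            # unary term
                for ni, nj in ((i + 1, j), (i, j + 1), (i - 1, j), (i, j - 1)):
                    if 0 <= ni < H and 0 <= nj < W:
                        d = np.linalg.norm(img[i, j] - img[ni, nj])
                        d *= np.sign(lum[i, j] - lum[ni, nj]) or 1.0
                        costs += beta * (levels - gray[ni, nj] - d) ** 2
                gray[i, j] = levels[np.argmin(costs)]        # ICM update
    return gray

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (4, 4, 3))               # hypothetical color patch
gray = color_to_gray_icm(img, iters=2)
```

Real implementations would optimize globally (e.g. graph cuts) rather than with this slow per-pixel sweep.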
Abstract:
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it requires no specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to the video and/or audio streams using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach, and also compared to any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams, and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
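The frame-level weight selection can be caricatured as follows: given per-class log-likelihoods from each stream, choose the stream weight that maximizes the largest combined class posterior. This is a sketch of the idea only; the paper's MWSP operates inside HMM decoding, and the likelihood values below are hypothetical.

```python
import numpy as np

def mwsp_weight(log_p_audio, log_p_video, grid=np.linspace(0, 1, 21)):
    """Pick the audio weight lam in [0, 1] that maximizes the largest
    class posterior of the weighted log-likelihood combination.
    Inputs: per-class log-likelihood arrays for one frame."""
    best_lam, best_post = 0.0, -np.inf
    for lam in grid:
        combined = lam * log_p_audio + (1 - lam) * log_p_video
        post = np.exp(combined - np.max(combined))   # stable softmax
        post /= post.sum()
        if post.max() > best_post:
            best_post, best_lam = post.max(), lam
    return best_lam, best_post

# Hypothetical frame: audio confidently favors class 0, video is uninformative
log_audio = np.array([0.0, -10.0, -10.0])
log_video = np.array([0.0, 0.0, 0.0])
lam, conf = mwsp_weight(log_audio, log_video)
```

In this toy frame the selection shifts all the weight to the informative (audio) stream, which mirrors the abstract's description of frame-by-frame reliability tracking.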
Abstract:
Purpose: The authors compared retinal nerve fiber layer height (RNFLH) measurements obtained by laser scanning tomography in patients with glaucoma with those in age-matched healthy subjects, and assessed the relationship between RNFLH measurements and optic disc and visual field status. Methods: Parameters of optic nerve head topography and RNFLH were evaluated in 125 eyes of 21 healthy subjects and 104 patients with glaucoma using the Heidelberg Retina Tomograph ([HRT] Heidelberg Engineering GmbH, Heidelberg, Germany) for the entire disc area and for the superior 70° (50° temporal and 20° nasal to the vertical midline) and inferior 70° sectors of the optic disc. The mean deviation of the visual field, as determined by the Humphrey program 24-2 (Humphrey Instruments, Inc., San Leandro, CA, U.S.A.), was calculated for the entire field and for the superior and inferior Bjerrum areas. Results: Retinal nerve fiber layer height parameters (mean RNFLH and RNFL cross-sectional area) were decreased significantly in patients with glaucoma compared with healthy individuals. Retinal nerve fiber layer height parameters were correlated strongly with rim volume, rim area, and cup/disc area ratio. Of the various topography measures, retinal nerve fiber layer (RNFL) parameters and cup/disc area ratio showed the strongest correlation with visual field mean deviation in patients with glaucoma. Conclusion: Retinal nerve fiber layer height measures were reduced substantially in patients with glaucoma compared with age-matched healthy subjects. Retinal nerve fiber layer height was correlated strongly with topographic optic disc parameters and visual field changes in patients with glaucoma.
Abstract:
Purpose: To compare two fast threshold strategies of visual field assessment, SITA Fast (HSF) and Tendency-Oriented Perimetry (TOP), in detecting visual field loss in patients with glaucoma. Methods: Seventy-six glaucoma, ocular hypertensive, and normal patients had HSF and TOP performed in random order. Quantitative comparisons of the global visual field indices (mean deviation for HSF and mean defect for TOP, both abbreviated MD, and pattern standard deviation (PSD) for HSF versus loss variance (LV) for TOP) were made using correlation coefficients. Humphrey global parameters were converted to Octopus equivalents, and method comparison analysis was used to determine agreement between the two strategies. Test duration times were compared using the t-test. Sensitivity and specificity for the two algorithms were determined according to predetermined criteria. Results: High correlation coefficient values were obtained for MD measurements between HSF and TOP (r = -0.89, P
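The "method comparison analysis" mentioned above is commonly done with Bland-Altman limits of agreement; a minimal sketch with hypothetical MD values (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Method-comparison (Bland-Altman) statistics for paired measurements:
    mean bias and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                       # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical MD values (dB) from the two strategies for five eyes
hsf = np.array([-2.1, -5.4, -0.8, -10.2, -3.3])
top = np.array([-1.8, -6.0, -1.1, -9.5, -3.0])
bias, loa = bland_altman(hsf, top)
```

A small bias with narrow limits of agreement would support the interchangeability of the two strategies; correlation alone (as reported above) does not establish agreement.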
Abstract:
Purpose: To report differences in the visual acuity (VA) recording methods used in peer-reviewed ophthalmology clinical studies over the past decade. Methods: We reviewed the method of assessing and reporting VA in 160 clinical studies from 2 UK and 2 US peer-reviewed journals, published in 1994 and 2004. Results: The method used to assess VA was specified in 62.5% of UK-published and 60% of US-published papers. In the results sections of the UK publications, the VA measurements presented were Snellen acuity (n = 58), logMAR acuity (n = 20), and symbol acuity (n = 1). Similarly, in the US publications VA was recorded in the results section using Snellen acuity (n = 60) and logMAR acuity (n = 14). Overall, 10% of the authors appeared to convert Snellen acuity measurements to logMAR format. Five studies (3%) chose to express Snellen-type acuities in decimal form, a method that can easily lead to confusion given the increased use of logMAR scoring systems. Conclusion: The authors recommend that, to ensure comparable visual results between studies and different study populations, clinical scientists should work to standardized VA testing protocols and report results in a manner consistent with the way in which they are measured. Copyright © 2008 S. Karger AG.
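The Snellen-to-logMAR conversion at issue above is a simple logarithm; a minimal sketch:

```python
import math

def snellen_to_logmar(numerator, denominator):
    """logMAR = log10(MAR), where MAR = denominator / numerator of the
    Snellen fraction (6/6 and 20/20 both give logMAR 0.0)."""
    return math.log10(denominator / numerator)

def decimal_to_logmar(decimal_acuity):
    """Decimal notation is the Snellen fraction evaluated as a number,
    so logMAR = -log10(decimal acuity)."""
    return -math.log10(decimal_acuity)
```

For example, 6/12 (decimal 0.5) corresponds to logMAR 0.30, which is why decimal values and logMAR values are easily confused when the notation is not stated.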
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, where hostile conditions such as the absence of Global Positioning System coverage, extreme lighting variations, and geometrically smooth surfaces may be expected. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan-matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve that, we combine state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data we were able to overcome the major problem of MonoSLAM implementations, known as scale factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted with the SIFT algorithm and inserted directly into the EKF mechanism according to the inverse depth parametrization. Wrong frame-to-frame feature matches were rejected through 1-point RANSAC (Random Sample Consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel.
Results from the localization strategy are presented and analyzed.
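The idea of an EKF prediction step driven by inertial data, followed by a visual correction, can be sketched in one dimension. This is a linear toy (so the EKF reduces to a Kalman filter), not the paper's MonoSLAM state with inverse depth features; all noise values are illustrative.

```python
import numpy as np

def kf_predict(x, P, a_meas, dt, q=0.1):
    """Prediction driven by an inertial (accelerometer) reading:
    constant-acceleration propagation of [position, velocity] over dt."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    u = np.array([0.5 * dt**2, dt]) * a_meas   # control input from the IMU
    x = F @ x + u
    P = F @ P @ F.T + q * np.eye(2)            # inflate uncertainty
    return x, P

def kf_update(x, P, z, r=0.5):
    """Correction with a visual position measurement z (feature observations
    collapsed here to a direct 1-D position fix)."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r                        # innovation covariance
    K = P @ H.T / S                            # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
x, P = kf_predict(x, P, a_meas=1.0, dt=1.0)    # inertial propagation
x, P = kf_update(x, P, z=0.6)                  # visual fix pulls the estimate
```

Fusing the inertial input in the prediction step is what pins down the metric scale that a vision-only MonoSLAM cannot observe.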
Abstract:
The hardness test, and more specifically the Vickers microhardness test, is one of the most widely used mechanical tests, whether in industry, in teaching, or in product research and development within materials science. In the vast majority of cases, this test is used mainly to characterize materials or to control the manufacturing quality of metallic materials. It is a test of relative simplicity and speed, with results that are comparable and relatable to other physical quantities of material properties. However, as a test method in which human intervention is important (the indentation produced by mechanical penetration is measured through an optical system), it exhibits some resulting weaknesses, such as dependence on the training and visual acuity of the technicians, and visual fatigue phenomena that affect the results over the course of a work shift; these phenomena affect the repeatability and reproducibility of the test results. CINFU has a Vickers microhardness tester whose operation depends on a trained technician and which exhibits all of the weaknesses already mentioned, making it eligible for the study and application of an alternative solution. This dissertation therefore presents the development of an alternative to the conventional optical method for measuring Vickers microhardness.
Using National Instruments LabVIEW together with its computer vision tools (NI Vision), the program first asks the technician to select the camera attached to the microhardness tester for digital image acquisition and the test method (test force); the program then processes the image (applying filters to eliminate background noise from the original image), the operator indicates the region of interest (ROI), and the vertices of the indentation and the lengths of its diagonals are then identified automatically, concluding, after their acceptance, with the calculation of the resulting microhardness. Certified hardness reference blocks (CRMs) were used to validate the results, which were satisfactory, with a high level of accuracy obtained in the measurements. Finally, an Excel spreadsheet was developed to determine the uncertainty associated with the Vickers microhardness measurements. The results of the two possible methodologies, the conventional optical method and the computer vision tools, were then compared, and good results were obtained with the proposed solution.
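The microhardness calculation at the end of the pipeline follows the standard Vickers formula, HV = 2F·sin(136°/2)/d² ≈ 1.8544·F/d², with F in kgf and d the mean measured diagonal in mm. A minimal sketch (the test force and diagonal values below are illustrative):

```python
import math

def vickers_hv(force_kgf, d1_mm, d2_mm):
    """Vickers hardness from the two measured indentation diagonals:
    HV = 2 F sin(136 deg / 2) / d^2, F in kgf, d = mean diagonal in mm."""
    d = (d1_mm + d2_mm) / 2.0
    return 2.0 * force_kgf * math.sin(math.radians(136.0 / 2.0)) / d**2

# e.g. an HV0.3 test (F = 0.3 kgf) with both diagonals measured at 0.030 mm
hv = vickers_hv(0.3, 0.030, 0.030)
```

Because HV scales with 1/d², small errors in the automatically detected diagonal lengths are amplified in the hardness value, which is why the accuracy of the vertex detection matters.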
Abstract:
Several recent studies have described the period of impaired alertness and performance known as sleep inertia that occurs upon awakening from a full night of sleep. They report that sleep inertia dissipates in a saturating exponential manner, the exact time course being task-dependent but generally persisting for one to two hours. A number of factors, including sleep architecture, sleep depth, and circadian variables, are also thought to affect its duration and intensity. The present study sought to replicate these findings for subjective alertness and reaction time, and also to examine electrophysiological changes through the use of event-related potentials (ERPs). Secondly, several sleep parameters were examined for potential effects on the initial intensity of sleep inertia. Ten participants spent two consecutive nights and subsequent mornings in the sleep lab. Sleep architecture was recorded for a full nocturnal episode of sleep based on participants' habitual sleep patterns. Subjective alertness and performance were measured for a 90-minute period after awakening. Alertness was measured every five minutes using the Stanford Sleepiness Scale (SSS) and a visual analogue scale (VAS) of sleepiness. An auditory tone also served as the target stimulus for an oddball task designed to examine the N100 and P300 components of the ERP waveform. The five-minute oddball task was presented at 15-minute intervals over the initial 90 minutes after awakening to obtain six measures of average RT and of amplitude and latency for N100 and P300. Standard polysomnographic recordings were used to obtain digital EEG and describe the night of sleep. Power spectral analyses (FFT) were used to calculate slow-wave activity (SWA) as a measure of sleep depth for the whole night, the 90 minutes before awakening, and the five minutes before awakening.
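The FFT-based slow-wave activity measure can be sketched as spectral power in the delta band. The epoch, sampling rate, band edges, and synthetic signal below are illustrative assumptions, not the study's exact analysis.

```python
import numpy as np

def band_power(eeg, fs, lo=0.5, hi=4.0):
    """Spectral power in a frequency band via the FFT periodogram, e.g.
    slow-wave activity (delta, here taken as 0.5-4 Hz) for sleep depth."""
    eeg = np.asarray(eeg, float)
    spec = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].sum()

# Hypothetical 30 s epoch at 128 Hz: a strong 2 Hz slow wave
# plus a weaker 10 Hz alpha ripple
fs = 128
t = np.arange(0, 30, 1.0 / fs)
eeg = 50 * np.sin(2 * np.pi * 2 * t) + 10 * np.sin(2 * np.pi * 10 * t)
swa = band_power(eeg, fs)
alpha = band_power(eeg, fs, 8.0, 12.0)
```

Averaging such epoch-wise values over the whole night, the last 90 minutes, or the last five minutes of sleep gives the three SWA measures described above.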
Abstract:
This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (a so-called "mosaic") of the area where the mission takes place while, at the same time, localizing itself on this map, following the Concurrent Mapping and Localization strategy. The proposed methodology is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle. As the vehicle moves, a sequence of images of the sea floor is acquired by the camera. For every image of the sequence, a set of characteristic features is detected by means of a corner detector, and their correspondences are then found in the next image of the sequence. Solving the correspondence problem accurately and reliably is a difficult task in computer vision. We consider different alternatives for this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and then selecting those that best characterize the point/matching pair and using them together to obtain a more robust characterization. Various alternatives are also studied to merge the information provided by the individual texture operators. Finally, the best approach in terms of robustness and efficiency is proposed. After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matchings in the next frame. Our aim is now to recover the apparent motion of the camera from these features. Although an accurate texture analysis is devoted to the matching procedure, some false matches (known as outliers) could still appear among the right correspondences.
For this reason, a robust estimation technique is used to estimate the planar transformation (homography) that explains the dominant motion of the image. Next, this homography is used to warp the processed image to the common mosaic frame, constructing a composite image formed from every frame of the sequence. With the aim of estimating the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements obtained by a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic increases in size, local image alignment errors increase the inaccuracies associated with the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself. In this situation new information is available, and the system can readjust the position estimates. Our proposal consists not only of localizing the vehicle, but also of readjusting its trajectory when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF). Kalman filtering provides an adequate framework for dealing with position estimates and their associated covariances. Finally, some experimental results are shown. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system. This setup enables a quantitative measurement of the accumulated error of the mosaics created in the lab. Then, results obtained from real sea trials using the URIS underwater vehicle are shown.
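The robust homography estimation step can be sketched with a 4-point DLT inside a RANSAC loop. The synthetic correspondences below (a pure translation plus gross outliers) are illustrative, not data from the thesis.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: homography mapping src -> dst (n >= 4 points),
    taken as the null vector of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, thresh=2.0,
                      rng=np.random.default_rng(0)):
    """Robust estimation: repeatedly fit a homography to a random 4-point
    sample, keep the largest inlier set (reprojection error < thresh px),
    then refit on all inliers."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inl = np.zeros(len(src), bool)
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        proj = np.hstack([src, ones]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inl = np.linalg.norm(proj - dst, axis=1) < thresh
        if inl.sum() > best_inl.sum():
            best_inl = inl
    return homography_dlt(src[best_inl], dst[best_inl]), best_inl

data_rng = np.random.default_rng(1)
src = data_rng.uniform(0, 100, (20, 2))
dst = src + np.array([5.0, -3.0])   # inliers follow a pure translation
dst[15:] += 50.0                    # last five matches are gross outliers
H, inliers = ransac_homography(src, dst)
```

The recovered homography reduces to the underlying translation, and the five corrupted matches are flagged as outliers, which is the role the robust step plays in the mosaicking pipeline.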
Abstract:
A driver controls a car by turning the steering wheel or by pressing on the accelerator or the brake. These actions are modelled by Gaussian processes, leading to a stochastic model for the motion of the car. The stochastic model is the basis of a new filter for tracking and predicting the motion of the car, using measurements obtained by fitting a rigid 3D model to a monocular sequence of video images. Experiments show that the filter easily outperforms traditional filters.
Abstract:
Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the appearance of the distractor ball, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in their responding hand. MEP amplitude was reduced when the ball approached near the responding hand, both when the hand was to the left and to the right of the midline. Strikingly, this suppression occurred very early, at 70-80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.
Abstract:
Measurements of atmospheric corona currents have been made for over 100 years to indicate the atmospheric electric field. Corona currents vary substantially, both in polarity and in magnitude. The instrument described here uses a sharp point sensor connected to a temperature-compensated bipolar logarithmic current amplifier. Calibrations over a range of currents from ±10 fA to ±3 μA and across ±20 °C show that it has an excellent logarithmic response over six orders of magnitude, from 1 pA to 1 μA, in both polarities for the range of atmospheric temperatures likely to be encountered in the southern UK. Comparison with atmospheric electric field measurements during disturbed weather confirms that bipolar electric fields induce corona currents of corresponding sign, with magnitudes ∼0.5 μA.
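Recovering a current from the output of an idealized bipolar logarithmic amplifier is a simple inversion. The volts-per-decade scale factor and reference current below are illustrative assumptions, not the instrument's calibration values; only the six-decade 1 pA to 1 μA span comes from the abstract.

```python
def corona_current(v_out, v_per_decade=0.25, i_ref=1e-12):
    """Invert an idealized bipolar logarithmic amplifier: the output
    voltage magnitude is proportional to log10(|I| / i_ref), and its
    sign gives the current polarity. Both parameters are hypothetical."""
    sign = 1.0 if v_out >= 0 else -1.0
    return sign * i_ref * 10 ** (abs(v_out) / v_per_decade)
```

With these example values, the full six-decade span from 1 pA to 1 μA maps onto an output swing of only ±1.5 V, which is the appeal of a logarithmic front end for a quantity that varies over many orders of magnitude.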