968 results for RMS detector
Abstract:
Road surface macrotexture is identified as one of the factors contributing to the surface's skid resistance. Existing methods of quantifying surface macrotexture, such as the sand patch test and the laser profilometer test, are either expensive or intrusive, requiring traffic control. High-resolution cameras have made it possible to acquire good-quality images of roads for automated analysis of texture depth. In this paper, a granulometric method based on image processing is proposed to estimate the road surface texture coarseness distribution from its edge profiles. More than 1300 images were acquired from two different sites, extending over a total of 2.96 km. The images were acquired using camera orientations of 60 and 90 degrees. The road surface is modeled as a texture of particles, and the size distribution of these particles is obtained from chord lengths across edge boundaries. The mean size from each distribution is compared with the sensor-measured texture depth obtained using a laser profilometer. By tuning the edge detector parameters, a coefficient of determination of up to R² = 0.94 between the proposed method and the laser profilometer method was obtained. The high correlation is also confirmed by robust calibration parameters that enable the method to be used on unseen data once it has been calibrated over road surface data with similar surface characteristics and under similar imaging conditions.
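The chord-length idea in this abstract can be illustrated with a minimal sketch (an assumption-laden illustration, not the authors' implementation): detect edges, then treat the gaps between successive edge pixels along each image row as chords and summarise their distribution. The function name, Canny thresholds and row-wise scan are illustrative choices.

```python
# Illustrative sketch only: chord-length "granulometry" on an edge map.
# The paper's exact pipeline, edge-detector tuning and calibration against
# laser texture depth are assumptions here.
import cv2
import numpy as np

def mean_chord_length(gray, low_thr=50, high_thr=150):
    """Estimate texture coarseness as the mean horizontal chord length
    between successive edge pixels (a crude granulometric proxy)."""
    edges = cv2.Canny(gray, low_thr, high_thr)
    chords = []
    for row in edges:
        idx = np.flatnonzero(row)          # columns where an edge was found
        if idx.size > 1:
            chords.extend(np.diff(idx))    # gaps between edges = chord lengths
    return float(np.mean(chords)) if chords else 0.0

# Usage: the coarseness estimate would then be regressed against
# laser-profilometer texture depth to obtain calibration parameters and R².
# gray = cv2.imread("road_surface.png", cv2.IMREAD_GRAYSCALE)
# print(mean_chord_length(gray))
```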
Abstract:
Condition monitoring of diesel engines can prevent unpredicted engine failures and their associated consequences. This paper presents an experimental study of the signal characteristics of a 4-cylinder diesel engine under various loading conditions. Acoustic emission, vibration and in-cylinder pressure signals were employed to study the effectiveness of these techniques for condition monitoring and for identifying symptoms of incipient failures. An event-driven synchronous averaging technique was employed to average the quasi-periodic diesel engine signal in the time domain, eliminating or minimizing the effect of engine speed and amplitude variations on the analysis of the condition monitoring signal. It was shown that acoustic emission (AE) is a better technique than vibration for condition monitoring of diesel engines due to its ability to produce high-quality signals (i.e., excellent signal-to-noise ratio) in a noisy diesel engine environment. It was found that the peak amplitude of the AE RMS signal corresponding to impact-like combustion-related events generally decreases as the loading increases, due to a more stable mechanical process in the engine. A small shift in the exhaust valve closing time was observed as the engine load increases, which indicates a prolonged combustion process in the cylinder (to produce more power). On the contrary, peak amplitudes of the AE RMS attributed to fuel injection increase as the loading increases. This can be explained by the increased fuel friction caused by the increased volume flow rate during injection. Multiple AE pulses during the combustion process were identified in the study, which were generated by the piston rocking motion and the interaction between the piston and the cylinder wall. The piston rocking motion is caused by the non-uniform pressure distribution acting on the piston head as a result of the non-linear combustion process of the engine. The rocking motion ceased when the pressure in the cylinder chamber stabilized.
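The event-driven synchronous averaging mentioned above can be sketched as follows, under the assumption that event markers (e.g. TDC pulses) delimit each engine cycle; the resampling grid size and variable names are illustrative and not taken from the paper.

```python
# Minimal sketch of event-driven synchronous averaging: each quasi-periodic
# engine cycle, delimited by event markers, is resampled onto a common
# (crank-angle-like) grid and then averaged, removing cycle-to-cycle speed
# variation.
import numpy as np

def synchronous_average(signal, event_indices, n_points=3600):
    """Average the segments between successive event indices on a common grid."""
    grid = np.linspace(0.0, 1.0, n_points)
    cycles = []
    for start, stop in zip(event_indices[:-1], event_indices[1:]):
        cycle = np.asarray(signal[start:stop], dtype=float)
        x = np.linspace(0.0, 1.0, cycle.size)
        cycles.append(np.interp(grid, x, cycle))   # normalise cycle length
    return np.mean(cycles, axis=0)

# e.g. averaged_ae_rms = synchronous_average(ae_rms, tdc_sample_indices)
```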
Abstract:
Acoustic emission has been found effective in offering earlier fault detection and improved fault identification capabilities. However, the sensors are inherently uncalibrated. This paper presents a source-to-sensor path calibration technique which can lead to the diagnosis of faults in a small multi-cylinder diesel engine. Preliminary analysis of the acoustic emission (AE) signals is outlined, including the time domain, the time-frequency domain, and the root mean square (RMS) energy. The results reveal how the RMS energy of a source propagates to the adjacent sensors. The findings make it possible to locate the source and estimate its influence on the adjacent sensors, and ultimately help to diagnose small diesel engines by minimising the crosstalk from multiple cylinders.
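For context, the short-time RMS "energy" of an AE signal, the quantity traced from source to adjacent sensors above, can be computed roughly as in this sketch; the window and hop lengths are assumptions.

```python
# Hedged sketch: short-time RMS of an AE signal. The paper's exact
# processing (window length, overlap, filtering) may differ.
import numpy as np

def short_time_rms(x, window=1024, hop=512):
    """Return the RMS value of successive, possibly overlapping windows."""
    x = np.asarray(x, dtype=float)
    return np.array([
        np.sqrt(np.mean(x[i:i + window] ** 2))
        for i in range(0, len(x) - window + 1, hop)
    ])
```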
Abstract:
Travel time is an important network performance measure and it quantifies congestion in a manner easily understood by all transport users. In urban networks, travel time estimation is challenging for a number of reasons, such as fluctuations in traffic flow due to traffic signals and significant flow to/from mid-link sinks/sources. The classical analytical procedure utilizes cumulative plots at upstream and downstream locations to estimate travel time between the two locations. In this paper, we discuss the issues and challenges with the classical analytical procedure, such as its vulnerability to non-conservation of flow between the two locations. The complexity with respect to exit-movement-specific travel time is also discussed. Recently, we developed a methodology utilising the classical procedure to estimate average travel time and its statistics on urban links (Bhaskar, Chung et al. 2010), in which detector, signal and probe vehicle data are fused. In this paper we extend the methodology to route travel time estimation and test its performance using simulation. The originality lies in defining cumulative plots for each exit turning movement utilising a historical database that is self-updated after each estimation. The performance is also compared with a method based solely on probe vehicles (Probe-only). The performance of the proposed methodology has been found to be insensitive to different route flows, with an average accuracy of more than 94% given one probe per estimation interval, which is more than a 5% improvement in accuracy over the Probe-only method.
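The classical cumulative-plot idea referred to above reduces, under conservation of flow, to a simple relation: the area between the upstream and downstream cumulative count curves equals the total time spent between the two locations, so dividing by the number of vehicles gives the average travel time. A minimal numerical sketch (not the authors' code, which additionally fuses signal and probe data):

```python
# Conceptual sketch of travel time from cumulative plots, assuming
# conservation of flow between the upstream and downstream detectors.
import numpy as np

def average_travel_time(t, cum_up, cum_down):
    """t: common time stamps (s); cum_up/cum_down: cumulative vehicle counts
    at the upstream and downstream detectors over the same period."""
    area = np.trapz(cum_up - cum_down, t)        # vehicle-seconds between the plots
    n_vehicles = cum_down[-1] - cum_down[0]      # vehicles that traversed the link
    return area / n_vehicles if n_vehicles > 0 else np.nan
```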
Abstract:
Solar ultraviolet (UV) radiation causes a range of skin disorders as well as affecting vision and the immune system. It also inhibits development of plants and animals. UV radiation monitoring is used routinely in some locations in order to alert the population to harmful solar radiation levels. There is ongoing research to develop UV-selective sensors [1–3]. A personal, inexpensive and simple UV-selective sensor would be desirable to measure UV intensity exposure. A prototype of such a detector has been developed and evaluated in our laboratory. It comprises a sealed two-electrode photoelectrochemical cell (PEC) based on nanocrystalline TiO2. This abundant semiconducting oxide, which is innocuous and very stable, is the subject of intense study at present due to its application in dye-sensitized solar cells (DSSC) [4]. Since TiO2 has a wide band gap (EG = 3.0 eV for rutile and EG = 3.2 eV for anatase), it is inherently UV-selective, so that UV filters are not required. This further reduces the cost of the proposed photodetector in comparison with conventional silicon detectors. The PEC is a semiconductor–electrolyte device that generates a photovoltage when it is illuminated and a corresponding photocurrent if the external circuit is closed. The device does not require external bias, and the short-circuit current is generally a linear function of illumination intensity. This greatly simplifies the electrical circuit needed when using the PEC as a photodetector. DSSC technology, which is based on a PEC containing nanocrystalline TiO2 sensitized with a ruthenium dye, holds out the promise of solar cells that are significantly cheaper than traditional silicon solar cells. The UV sensor proposed in this paper relies on the creation of electron–hole pairs in the TiO2 by UV radiation, so that it would be even cheaper than a DSSC since no sensitizer dye is needed. Although TiO2 has been reported as a suitable material for UV sensing [3], to the best of our knowledge, the PEC configuration described in the present paper is a new approach. In the present study, a novel double-layer TiO2 structure has been investigated. Fabrication is based on a simple and inexpensive technique for nanostructured TiO2 deposition using microwave-activated chemical bath deposition (MW-CBD) that has been reported recently [5]. The highly transparent TiO2 (anatase) films obtained are densely packed, and they adhere very well to the transparent conducting oxide (TCO) substrate [6]. These compact layers have been studied as contacting layers in double-layer TiO2 structures for DSSC, since improvement of electron extraction at the TiO2–TCO interface is expected [7]. Here we compare devices incorporating a single mesoporous nanocrystalline TiO2 structure with devices based on a double structure in which a MW-CBD film is situated between the TCO and the mesoporous nanocrystalline TiO2 layer. Besides improving electron extraction, this film could also help to block recombination of electrons transferred to the TCO with oxidized species in the electrolyte, as has been reported in the case of DSSC for compact TiO2 films obtained by other deposition techniques [8,9]. The two types of UV-selective sensors were characterized in detail. The current–voltage characteristics, spectral response, intensity dependence of the short-circuit current and response times were measured and analyzed in order to evaluate the potential of sealed mesoporous TiO2-based photoelectrochemical cells (PEC) as low-cost personal UV photodetectors.
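For context, the inherent UV selectivity claimed for TiO2 follows directly from its band gap: only photons with energy above E_G are absorbed, i.e. wavelengths shorter than the absorption onset (using hc ≈ 1240 eV nm):

```latex
\lambda_G = \frac{hc}{E_G} \approx \frac{1240\ \text{eV nm}}{E_G}
\quad\Rightarrow\quad
\lambda_G(\text{anatase},\ 3.2\ \text{eV}) \approx 387\ \text{nm},
\qquad
\lambda_G(\text{rutile},\ 3.0\ \text{eV}) \approx 413\ \text{nm}.
```

Both phases therefore respond only to near-UV and shorter wavelengths, which is why no external UV filter is required.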
Abstract:
We measured wave aberrations over the central 42° x 32° visual field for a 5 mm pupil in groups of 10 emmetropic (mean spherical equivalent 0.11 ± 0.50 D) and 9 myopic (MSE -3.67 ± 1.91 D) young adults. Relative peripheral refractive errors over the measured field were generally myopic in both groups. Mean values of spherical aberration were almost constant across the measured field and were more positive in emmetropes (+0.023 ± 0.043 microns) than in myopes (-0.007 ± 0.045 microns). Coma varied more rapidly with field angle in myopes: modeling suggested that this difference reflected the differences in mean anterior corneal shape and axial length between the two groups. In general, however, overall levels of RMS aberration differed only modestly between the two groups, implying that it is unlikely that high levels of aberration contribute to myopia development.
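The total RMS wavefront aberration referred to here is conventionally obtained from the orthonormal Zernike coefficients as the root sum of squares; this standard relation is given for context and is not quoted from the paper:

```latex
\text{RMS} = \sqrt{\sum_{n,m} \left(c_n^m\right)^2},
\qquad
\text{RMS}_{\text{higher-order}} = \sqrt{\sum_{n \ge 3}\,\sum_{m} \left(c_n^m\right)^2}.
```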
Abstract:
Changes in peripheral aberrations, particularly higher-order aberrations, as a function of accommodation have received little attention. Wavefront aberrations were measured for the right eyes of 9 young adult emmetropes at 38 field positions in the central 42 x 32 degrees of the visual field. Subjects accommodated monocularly to targets at vergences of either 0.3 or 4.0 D. Wavefront data for a 5 mm diameter pupil were analyzed either in terms of the vector components of refraction or in terms of Zernike coefficients and total RMS wavefront aberration. Relative peripheral refractive error (RPRE) was myopic at both accommodation demands and showed only a slight, not statistically significant, hypermetropic shift in the vertical meridian at the higher accommodation demand. There was little change in the astigmatic components of refraction or the higher-order Zernike coefficients, apart from fourth-order spherical aberration, which became more negative (by 0.10 µm) at all field locations. Although it has been suggested that near work and the state of peripheral refraction may play some role in myopia development, for most of our adult emmetropes any changes with accommodation in RPRE and aberration were small. Hence it seems unlikely that such changes are important to late-onset myopisation.
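The "vector components of refraction" mentioned above are, in the usual power-vector convention (assumed here; the abstract does not spell it out), computed from sphere S, cylinder C and axis θ as:

```latex
M = S + \frac{C}{2}, \qquad
J_0 = -\frac{C}{2}\cos 2\theta, \qquad
J_{45} = -\frac{C}{2}\sin 2\theta .
```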
Abstract:
Local image feature extractors that select local maxima of the determinant-of-Hessian function have been shown to perform well and are widely used. This paper introduces the negative local minima of the determinant-of-Hessian function for local feature extraction. The properties and scale-space behaviour of these features are examined and found to be desirable for feature extraction. It is shown how this new feature type can be implemented alongside the existing local-maxima approach at negligible extra processing cost. Applications to affine covariant feature extraction and sub-pixel-precise corner extraction are demonstrated. Experimental results indicate that the new corner detector is more robust to image blur and noise than existing methods. It is also accurate for a broader range of corner geometries. An affine covariant feature extractor is implemented by combining the minima of the determinant of Hessian with existing scale and shape adaptation methods. This extractor can be implemented alongside the existing Hessian-maxima extractor simply by finding both minima and maxima during the initial extraction stage. The minima features increase the number of correspondences two- to four-fold. The additional minima features are very distinct from the maxima features in descriptor space and do not make the matching process more ambiguous.
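A compact sketch of the determinant-of-Hessian response and of extracting both its local maxima and the negative local minima introduced above; this is single-scale only, and the smoothing scale, thresholds, and the scale/shape adaptation steps are assumptions or omissions.

```python
# Illustrative single-scale determinant-of-Hessian feature extraction.
# Assumes `image` is a float array scaled to [0, 1]; thresholds are arbitrary.
import numpy as np
from scipy import ndimage

def det_hessian_features(image, sigma=2.0, thr=1e-4):
    Lxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Lyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Lxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    det = Lxx * Lyy - Lxy ** 2
    # local maxima of the response (the existing Hessian-maxima features)
    maxima = (det == ndimage.maximum_filter(det, size=3)) & (det > thr)
    # negative local minima of the response (the additional feature type)
    minima = (det == ndimage.minimum_filter(det, size=3)) & (det < -thr)
    return np.argwhere(maxima), np.argwhere(minima)
```

Because both feature sets come from the same response image, finding the minima alongside the maxima costs little more than the comparison itself, which matches the "negligible extra processing cost" claim.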
Abstract:
This paper provides a fundamental understanding of the use of cumulative plots for travel time estimation on signalized urban networks. Analytical modeling is performed to generate cumulative plots based on the availability of data: a) Case-D, detector data only; b) Case-DS, detector data and signal timings; and c) Case-DSS, detector data, signal timings and saturation flow rate. An empirical study and a sensitivity analysis based on simulation experiments show consistent performance for Case-DS and Case-DSS, whereas the performance of Case-D is inconsistent. Case-D is sensitive to the detection interval and to the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, both accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle both accuracy and reliability are high.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains amongst others. Currently, the burden of validating both the interactive functionality and visual consistency of a virtual environment's content is carried entirely by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date, only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games heavily rely on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
Abstract:
In 1999, Richards compared the accuracy of commercially available motion capture systems commonly used in biomechanics. Richards identified that in static tests the optical motion capture systems generally produced RMS errors of less than 1.0 mm. During dynamic tests, the RMS error increased to up to 4.2 mm in some systems. In the last 12 years motion capture systems have continued to evolve and now include high-resolution CCD or CMOS image sensors, wireless communication, and high full-frame sampling frequencies. In addition to hardware advances, there have also been a number of advances in software, including improved calibration and tracking algorithms, real-time data streaming, and the introduction of the c3d standard. These advances have allowed the system manufacturers to maintain a high retail price in the name of advancement. In areas such as gait analysis and ergonomics, many of the advanced features, such as high-resolution image sensors and high sampling frequencies, are not required due to the nature of the tasks typically investigated. Recently, Natural Point introduced low-cost cameras which, on face value, appear to be suitable as at the very least a high-quality teaching tool in biomechanics, and possibly even a research tool when coupled with the correct calibration and tracking software. The aim of the study was therefore to compare both the linear accuracy and the quality of angular kinematics from a typical high-end motion capture system and a low-cost system during a simple task.
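For reference, the kind of linear-accuracy figure cited from Richards (RMS error in mm) can be computed from tracked versus reference marker positions as in this hedged sketch; the array layout is an assumption.

```python
# Hedged sketch of a linear-accuracy metric for motion capture: RMS error
# between marker positions reported by a system and reference values
# (e.g. a known rigid geometry or a gold-standard system).
import numpy as np

def rms_error(measured, reference):
    """measured, reference: (N, 3) arrays of marker coordinates in mm."""
    residuals = np.linalg.norm(measured - reference, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```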
Abstract:
This paper presents techniques which can lead to the diagnosis of faults in a small multi-cylinder diesel engine. Preliminary analysis of the acoustic emission (AE) signals is outlined, including time-frequency analysis and selection of the optimum frequency band. The results of applying mean field independent component analysis (MFICA) to separate the AE root mean square (RMS) signals, and the effects of changing parameter values, are also outlined. The results on separation of RMS signals show that this technique has the potential to increase the probability of successfully identifying the AE events associated with the various mechanical events within the combustion process of multi-cylinder diesel engines.
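A rough sketch of the blind-source-separation step described above; note that scikit-learn's FastICA is used here purely as a readily available stand-in for the paper's mean field ICA (MFICA), which is not part of standard libraries.

```python
# Sketch of blind source separation applied to multi-sensor AE RMS signals.
# FastICA is a stand-in for MFICA; the input layout is an assumption.
import numpy as np
from sklearn.decomposition import FastICA

def separate_ae_rms(rms_channels):
    """rms_channels: (n_samples, n_sensors) array of AE RMS signals.
    Returns estimated source signals, one per column."""
    ica = FastICA(n_components=rms_channels.shape[1], random_state=0)
    return ica.fit_transform(rms_channels)
```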
Abstract:
Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration either are unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast and not requiring a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
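A minimal sketch of the calibration-point location step, assuming OpenCV's MSER implementation; the clustering that merges duplicate detections and the end-to-end multi-camera calibration are not reproduced here.

```python
# Illustrative sketch only: locating calibration points in a thermal image of
# a high-contrast mask by detecting maximally stable extremal regions (MSER)
# and taking region centroids. Assumes `gray` is an 8-bit single-channel image.
import cv2
import numpy as np

def mser_centroids(gray):
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    # one centroid per detected region; clustering would merge duplicates
    return np.array([region.mean(axis=0) for region in regions])

# The resulting image points, paired with the known mask geometry (object
# points), could then be passed to cv2.calibrateCamera.
```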
Abstract:
A wireless sensor network system must have the ability to tolerate harsh environmental conditions and reduce communication failures. In a typical outdoor situation, the presence of wind can introduce movement in the foliage. This motion of vegetation structures causes large and rapid signal fading in the communication link and must be accounted for when deploying a wireless sensor network system in such conditions. This thesis examines the fading characteristics experienced by wireless sensor nodes due to the effect of varying wind speed in a foliage-obstructed transmission path. It presents extensive measurement campaigns at two locations using a typical wireless sensor network configuration. The significance of this research lies in the varied approaches of its different experiments, involving a variety of vegetation types, scenarios and the use of different polarisations (vertical and horizontal). Non-line-of-sight (NLoS) scenarios investigate the wind effect for different vegetation densities, including the Acacia tree, the Dogbane tree and tall grass, whereas the line-of-sight (LoS) scenario investigates the effect of wind when swaying grass affects the ground-reflected component of the signal. The vegetation types and scenarios are intended to simulate real-life working conditions of wireless sensor network systems in outdoor foliated environments. The results from the measurements are presented as statistical models involving first- and second-order statistics. We found that in most cases the fading amplitude could be approximated by both the Lognormal and the Nakagami distribution, whose m parameter was found to depend on received power fluctuations. The Lognormal distribution is known to result from slow fading characteristics due to shadowing. This study concludes that fading caused by wind-induced variations in received power in wireless sensor network systems is insignificant: there is no notable difference in Nakagami m values between the low, calm and windy wind-speed categories. The second-order analysis also shows that the durations of deep fades are very short: 0.1 second for 10 dB attenuation below the RMS level for vertical polarisation and 0.01 second for 10 dB attenuation below the RMS level for horizontal polarisation. Another key finding is that the received signal strength for horizontal polarisation performs more than 3 dB better than vertical polarisation for LoS and near-LoS (thin vegetation) conditions, and up to 10 dB better for denser vegetation conditions.
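The first-order fitting described above can be sketched with SciPy's built-in distributions; fixing the location parameter to zero is an assumption made here for amplitude data, and the thesis' exact fitting procedure may differ.

```python
# Hedged sketch: fitting Nakagami and lognormal distributions to received
# signal fading amplitudes (linear envelope samples, not dB values).
import numpy as np
from scipy import stats

def fit_fading(amplitudes):
    """Return the fitted Nakagami m and lognormal sigma for a fading envelope."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    m, _, _ = stats.nakagami.fit(amplitudes, floc=0)      # shape parameter m
    sigma, _, _ = stats.lognorm.fit(amplitudes, floc=0)   # lognormal shape
    return {"nakagami_m": m, "lognorm_sigma": sigma}
```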
Abstract:
Raman spectroscopy, when used in spatially offset mode, has become a potential tool for the identification of explosives and other hazardous substances concealed in opaque containers. The molecular fingerprinting capability of Raman spectroscopy makes it an attractive tool for the unambiguous identification of hazardous substances in the field. Additionally, minimal sample preparation is required compared with other techniques. We report a field-portable, time-resolved Raman sensor for the detection of concealed chemical hazards in opaque containers. The new sensor uses a pulsed nanosecond laser source in conjunction with an intensified CCD detector, and employs a combination of time- and space-resolved Raman spectroscopy to enhance the detection capability. The sensor can identify concealed hazards in a single measurement without any chemometric data treatment.