105 results for SP-laser interference
Abstract:
The Clay Minerals Society Source Clay kaolinites, Georgia KGa-1 and KGa-2, have been subjected to particle size determinations by 1) conventional sedimentation methods, 2) electron microscopy and image analysis, and 3) laser scattering using improved algorithms for the interaction of light with small particles. Particle shape, size distribution, and crystallinity vary considerably for each kaolinite. Replicate analyses of separated size fractions showed that in the <2 µm range, the sedimentation/centrifugation method of Tanner and Jackson (1947) is reproducible for different kaolinite types and that the calculated size ranges are in reasonable agreement with the size bins estimated from laser scattering. Particle sizes determined by laser scattering must be calculated using Mie theory when the dominant particle size is less than ∼5 µm. Based on this study of two well-known and structurally different kaolinites, laser scattering, with improved data reduction algorithms that include Mie theory, should be considered an internally consistent and rapid technique for clay particle sizing.
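The sedimentation method referenced above (Tanner and Jackson, 1947) rests on Stokes' law, which relates a particle's terminal settling velocity to the square of its equivalent spherical diameter. A minimal sketch of that relation, assuming a kaolinite density of about 2.6 g/cm³ and room-temperature water (illustrative values, not taken from the study):

```python
def stokes_settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m)
    with density rho_p (kg/m^3) in a fluid of density rho_f and
    dynamic viscosity mu (Pa.s), per Stokes' law."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

# A 2 um kaolinite particle (assumed density 2600 kg/m^3) in water
# settles at roughly 3.5 um/s, hence the long settling times that
# make laser scattering attractive as a rapid alternative.
v = stokes_settling_velocity(2e-6, 2600.0)
```

The d² dependence is why the <2 µm fraction takes hours to separate by gravity settling alone, and why centrifugation is used for the finest bins.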
Abstract:
This collaborative project by Daniel Mafe and Andrew Brown, one of a number they have made together, conjoins painting and digital sound into a single, large-scale, immersive exhibition/installation. The work as a whole acts as an interstitial point between contrasting approaches to abstraction: the visual and aural, the digital and analogue are pushed into an alliance and each works to alter perceptions of the other. For example, the paintings no longer mutely sit on the wall to be stared into. The sound seemingly emanating from each work shifts the viewer’s typical visual perception and engages their aural sensibilities. This seems to make one more aware of the objects as objects – the surface of each piece is brought into scrutiny – and immerses the viewer more viscerally within the exhibition. Similarly, the sonic experience is focused and concentrated spatially by each painted piece even as the exhibition is dispersed throughout the space. The sounds and images are similar in each locale but not identical; though they may seem to be the same on casual interaction, closer attention will quickly show this is not the case. In preparing this exhibition each artist has had to shift their mode of making to accommodate the other’s contribution. This was mainly done by a process of emptying whereby each was called upon to do less to the works they were making and to iterate the works toward a shared conception, blurring notions of individual imagination while maintaining material authorship. Emptying was necessary to enable sufficient porosity, where each medium allowed the other entry to its previously gated domain. The paintings are simple and subtle to allow the odd sonic textures a chance to work on the viewer’s engagement with them. The sound remains both abstract, using noise-like textures, and at a low volume to allow the audience’s attention to wander back and forth between aspects of the works.
Abstract:
The domestication of creative software and hardware has been a significant factor in the recent proliferation of still and moving image creation. Booming numbers of amateur image-makers have the resources, skills and ambitions to create and distribute their work on a mass scale. At the same time, contemporary art seems increasingly dominated by ‘post-medium’ practices that adopt and adapt the representational techniques of mass culture, rather than overtly reject or oppose them. As a consequence of this network of forces, the field of image and video production is no longer the exclusive specialty of art and the mass media, and art may no longer be the most prominent watchdog of mass image culture. Intuitively and intentionally, contemporary artists are responding to these shifting conditions. From the position of a creative practitioner and researcher, this paper examines the strategies that contemporary artists use to engage with the changing relationships between image culture, lived experience and artistic practice. By examining the intersections between W.J.T. Mitchell’s detailed understanding of visual literacy and Jacques Derrida’s philosophical models of reading and writing, I identify ‘editing’ as a broad methodology that describes how practitioners creatively and critically engage with the field of still and moving images. My contention is that by emphasising the intersections of looking and making, ‘reading’ and ‘writing’, artists provide crucial jump cuts, pauses and distortions in the medley of our mediated experiences.
Abstract:
Purpose: The measurement of broadband ultrasonic attenuation (BUA) in cancellous bone for the assessment of osteoporosis follows a parabolic-type dependence with bone volume fraction, with minima corresponding to entirely bone and entirely marrow. Langton has recently proposed that the primary BUA mechanism may be significant phase interference due to variations in propagation transit time through the test sample as detected over the phase-sensitive surface of the receive ultrasound transducer. This fundamentally simple concept assumes that the propagation of ultrasound through a complex solid:liquid composite sample such as cancellous bone may be considered as an array of parallel ‘sonic rays’. The transit time of each ray is defined by the proportion of bone and marrow propagated, being a minimum (tmin) solely through bone and a maximum (tmax) solely through marrow. A Transit Time Spectrum (TTS), ranging from tmin to tmax, may be defined describing the proportion of sonic rays having a particular transit time, effectively describing lateral inhomogeneity of transit time over the surface of the receive ultrasound transducer. Phase interference may result from interaction of ‘sonic rays’ of differing transit times. The aim of this study was to test the hypothesis that there is a dependence of phase interference upon the lateral inhomogeneity of transit time by comparing experimental measurements and computer simulation predictions of ultrasound propagation through a range of relatively simplistic solid:liquid models exhibiting a range of lateral inhomogeneities. Methods: A range of test models was manufactured using acrylic and water as surrogates for bone and marrow respectively. 
The models varied in thickness in one dimension normal to the direction of propagation, hence exhibiting a range of transit time lateral inhomogeneities, ranging from minimal (single transit time) to maximal (wedge; ultimately the limiting case where each sonic ray has a unique transit time). For the experimental component of the study, two unfocused 1 MHz, ¾″ diameter broadband transducers were utilized in transmission mode; ultrasound signals were recorded for each of the models. The computer simulation was performed with Matlab, where the transit time and relative amplitude of each sonic ray was calculated. The transit time for each sonic ray was defined as the sum of transit times through the acrylic and water components. The relative amplitude considered the reception area for each sonic ray along with absorption in the acrylic. To replicate phase-sensitive detection, all sonic rays were summed and the output signal plotted in comparison with the experimentally derived output signal. Results: From qualitative and quantitative comparison of the experimental and computer simulation results, there is a high degree of agreement (94.2% to 99.0%) between the two approaches, supporting the concept that propagation of an ultrasound wave, for the models considered, may be approximated by a parallel sonic ray model where the transit time of each ray is defined by the proportion of ‘bone’ and ‘marrow’. Conclusions: This combined experimental and computer simulation study has successfully demonstrated that lateral inhomogeneity of transit time has significant potential for phase interference to occur if a phase-sensitive ultrasound receive transducer is implemented, as in most commercial ultrasound bone analysis devices.
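The coherent summation step of the sonic-ray model can be sketched as follows. The sound speeds, model thickness and frequency below are illustrative assumptions, not the paper's exact values; each ray is reduced to a unit-amplitude phasor (absorption and reception area are ignored for brevity):

```python
import numpy as np

c_acrylic, c_water = 2750.0, 1480.0   # assumed sound speeds, m/s
D = 0.01                              # assumed model thickness, m
f = 1.0e6                             # 1 MHz transducer frequency

def received_amplitude(acrylic_fracs):
    """Phase-sensitive receiver output: coherent sum of unit-amplitude
    parallel sonic rays, each delayed by its own transit time."""
    t = acrylic_fracs * D / c_acrylic + (1 - acrylic_fracs) * D / c_water
    return abs(np.exp(2j * np.pi * f * t).sum()) / len(t)

# Minimal lateral inhomogeneity: a uniform plate, single transit time.
uniform = received_amplitude(np.full(1000, 0.5))
# Maximal lateral inhomogeneity: the wedge, every ray a unique time.
wedge = received_amplitude(np.linspace(0.0, 1.0, 1000))
```

With these numbers the wedge's spread of transit times spans roughly three acoustic periods, so the coherently summed signal largely cancels, while the uniform plate sums in phase to full amplitude, illustrating the proposed phase-interference mechanism.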
Abstract:
Balcony acoustic treatments can mitigate the effects of community road traffic noise. To further investigate, a theoretical study into the effects of balcony acoustic treatment combinations on speech interference and transmission is conducted for various street geometries. Nine different balcony types are investigated using a combined specular and diffuse reflection computer model. Diffusion in the model is calculated using the radiosity technique. The balcony types include a standard balcony with or without a ceiling and with various combinations of parapet, ceiling absorption and ceiling shield. A total of 70 balcony and street geometrical configurations are analyzed with each balcony type, resulting in 630 scenarios. In each scenario the reverberation time, speech interference level (SIL) and speech transmission index (STI) are calculated. These indicators are compared to determine trends based on the effects of propagation path, inclusion of opposite buildings and difference with a reference position outside the balcony. The results demonstrate trends in SIL and STI with different balcony types. It is found that an acoustically treated balcony reduces speech interference. A parapet provides the largest improvement, followed by absorption on the ceiling. The largest reductions in speech interference arise when a combination of balcony acoustic treatments is applied.
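The SIL indicator used in these scenarios is, in its common four-band form, simply the arithmetic mean of the octave-band sound pressure levels at 500, 1000, 2000 and 4000 Hz. A minimal sketch, with hypothetical road-traffic band levels (not data from the study):

```python
def speech_interference_level(octave_band_levels):
    """Four-band SIL (dB): arithmetic mean of the octave-band SPLs
    at 500, 1000, 2000 and 4000 Hz."""
    bands = (500, 1000, 2000, 4000)
    return sum(octave_band_levels[b] for b in bands) / 4.0

# Hypothetical traffic-noise spectrum at a balcony position (dB):
levels = {500: 62.0, 1000: 60.0, 2000: 55.0, 4000: 48.0}
sil = speech_interference_level(levels)  # -> 56.25 dB
```

A treatment such as a parapet or ceiling absorption lowers the band levels reaching the balcony, and the SIL reduction is then read off directly as the drop in this average.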
Abstract:
Residential balcony design influences speech interference levels caused by road traffic noise and a simplified design methodology is needed for optimising balcony acoustic treatments. This research comprehensively assesses speech interference levels and benefits of nine different balcony designs situated in urban street canyons through the use of a combined direct, specular reflection and diffuse reflection path theoretical model. This thesis outlines the theory, analysis and results that lead up to the presentation of a practical design guide which can be used to predict the acoustic effects of balcony geometry and acoustic treatments in streets with variable geometry and acoustic characteristics.
Abstract:
YBCO thin films were fabricated by laser deposition, in situ on MgO substrates, using both O2 and N2O as process gas. Films with Tc above 90 K and jc of 10⁶ A/cm² at 77 K were grown in oxygen at a substrate temperature of 765 °C. Using N2O, the optimum substrate temperature was 745 °C, giving a Tc of 87 K. At lower temperatures, the films made in N2O had higher Tc (79 K) than the films made in oxygen (66 K). SEM and STM investigations of the film surfaces showed the films to consist of a comparatively smooth background surface and a distribution of larger particles. Both the particle size and the distribution density depended on the substrate temperature.
Abstract:
This article examines manual textual categorisation by human coders with the hypothesis that the law of total probability may be violated for difficult categories. An empirical evaluation was conducted to compare a one-step categorisation task with a two-step categorisation task using crowdsourcing. It was found that the law of total probability was violated. Both quantum and classical probabilistic interpretations of this violation are presented. Further studies are required to resolve whether quantum models are more appropriate for this task.
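The violation being tested can be stated numerically: the law of total probability requires the one-step probability of a category assignment to equal its decomposition over the two-step path. A small check with hypothetical probabilities (illustrative numbers only, not the paper's data):

```python
# Two-step task: coders first decide B (an intermediate category
# judgement), then decide A. The law of total probability says
#   P(A) = P(A|B) P(B) + P(A|not B) P(not B)
p_B = 0.6            # hypothetical P(B) from step one
p_A_given_B = 0.7    # hypothetical conditional rates
p_A_given_notB = 0.5
p_A_two_step = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)  # 0.62

# One-step task: P(A) observed directly (hypothetical value).
p_A_one_step = 0.55

# A nonzero difference is the violation the study reports; quantum
# models accommodate it via an additional interference term.
violation = p_A_one_step - p_A_two_step  # -0.07
```

In a classical account the two quantities must agree for any consistent assignment of probabilities, which is why a persistent nonzero `violation` invites the quantum-probabilistic interpretation.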
Abstract:
RNA interference (RNAi) is widely used to silence genes in plants and animals. It operates through the degradation of target mRNA by endonuclease complexes guided by approximately 21 nucleotide (nt) short interfering RNAs (siRNAs). A similar process regulates the expression of some developmental genes through approximately 21 nt microRNAs. Plants have four types of Dicer-like (DCL) enzyme, each producing small RNAs with different functions. Here, we show that DCL2, DCL3 and DCL4 in Arabidopsis process both replicating viral RNAs and RNAi-inducing hairpin RNAs (hpRNAs) into 22-, 24- and 21 nt siRNAs, respectively, and that loss of both DCL2 and DCL4 activities is required to negate RNAi and to release the plant's repression of viral replication. We also show that hpRNAs, similar to viral infection, can engender long-distance silencing signals and that hpRNA-induced silencing is suppressed by the expression of a virus-derived suppressor protein. These findings indicate that hpRNA-mediated RNAi in plants operates through the viral defence pathway.
Abstract:
PURPOSE To investigate the utility of using non-contact laser-scanning confocal microscopy (NC-LSCM), compared with the more conventional contact laser-scanning confocal microscopy (C-LSCM), for examining corneal substructures in vivo. METHODS An attempt was made to capture representative images from the tear film and all layers of the cornea of a healthy 35-year-old female, using both NC-LSCM and C-LSCM, on separate days. RESULTS Using NC-LSCM, good quality images were obtained of the tear film, stroma, and a section of endothelium, but the corneal depth of the images of these various substructures could not be ascertained. Using C-LSCM, good quality, full-field images were obtained of the epithelium, subbasal nerve plexus, stroma, and endothelium, and the corneal depth of each of the captured images could be ascertained. CONCLUSIONS NC-LSCM may find general use for clinical examination of the tear film, stroma and endothelium, with the caveat that the depth of stromal images cannot be determined when using this technique. This technique also facilitates image capture of oblique sections of multiple corneal layers. The inability to clearly and consistently image thin corneal substructures - such as the tear film, subbasal nerve plexus and endothelium - is a key limitation of NC-LSCM.
Abstract:
This work considers the problem of building high-fidelity 3D representations of the environment from sensor data acquired by mobile robots. Multi-sensor data fusion allows for more complete and accurate representations, and for more reliable perception, especially when different sensing modalities are used. In this paper, we propose a thorough experimental analysis of the performance of 3D surface reconstruction from laser and mm-wave radar data using Gaussian Process Implicit Surfaces (GPIS), in a realistic field robotics scenario. We first analyse the performance of GPIS using raw laser data alone and raw radar data alone, respectively, with different choices of covariance matrices and different resolutions of the input data. We then evaluate and compare the performance of two different GPIS fusion approaches. The first, state-of-the-art approach directly fuses raw data from laser and radar. The alternative approach proposed in this paper first computes an initial estimate of the surface from each single source of data, and then fuses these two estimates. We show that this method outperforms the state of the art, especially in situations where the sensors react differently to the targets they perceive.
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception applications by anticipating situations when laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial if conservative decisions are the most appropriate.
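The classification step can be sketched with a minimal kNN implementation. The two-dimensional feature vectors and labels below are hypothetical stand-ins for the paper's visual image-quality features, chosen only to illustrate the majority-vote mechanism:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote over its k nearest
    training examples (Euclidean distance, binary 0/1 labels)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest_labels = train_y[np.argsort(dists)[:k]]
    return int(round(nearest_labels.mean()))

# Hypothetical image-quality features (e.g. contrast, sharpness),
# labelled 1 when the paired laser scan was degraded by smoke:
X = np.array([[0.90, 0.80], [0.85, 0.90], [0.20, 0.30],
              [0.30, 0.25], [0.15, 0.20], [0.80, 0.85]])
y = np.array([0, 0, 1, 1, 1, 0])

# A low-quality image is classified as 'smoke likely' (label 1),
# so the corresponding laser scan can be treated conservatively.
label = knn_predict(X, y, np.array([0.25, 0.20]))  # -> 1
```

The conservative-decision setting in the abstract corresponds to acting on label 1 even at the cost of some false positives, e.g. by discounting the affected laser scan.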
Abstract:
This paper presents an approach to promote the integrity of perception systems for outdoor unmanned ground vehicles (UGV) operating in challenging environmental conditions (presence of dust or smoke). The proposed technique automatically evaluates the consistency of the data provided by two sensing modalities: a 2D laser range finder and a millimetre-wave radar, allowing for perceptual failure mitigation. Experimental results, obtained with a UGV operating in rural environments, and an error analysis validate the approach.
Abstract:
Camera-laser calibration is necessary for many robotics and computer vision applications. However, existing calibration toolboxes still require laborious effort from the operator in order to achieve reliable and accurate results. This paper proposes algorithms that augment two existing trusted calibration methods with an automatic extraction of the calibration object from the sensor data. The result is a complete procedure that allows for automatic camera-laser calibration. The first stage of the procedure is automatic camera calibration, which is useful in its own right for many applications. The chessboard extraction algorithm it provides is shown to outperform openly available techniques. The second stage completes the procedure by providing automatic camera-laser calibration. The procedure has been verified by extensive experimental tests, with the proposed algorithms providing a major reduction in time required from an operator in comparison to manual methods.
Abstract:
This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each acquired laser scan and camera image pair, the information at corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
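The projection step described above follows the standard pinhole camera model: a laser point is moved into the camera frame by the estimated extrinsics and then projected through the intrinsics. A minimal sketch, with hypothetical calibration values (the uncertainty propagation is omitted):

```python
import numpy as np

def project_laser_point(P_laser, R, t, K):
    """Project a 3D laser point into the image using the estimated
    laser-to-camera rotation R, translation t and intrinsics K."""
    P_cam = R @ P_laser + t   # laser frame -> camera frame
    uvw = K @ P_cam           # pinhole projection (homogeneous)
    return uvw[:2] / uvw[2]   # -> pixel coordinates (u, v)

# Hypothetical calibration: identity laser-to-camera transform,
# 500 px focal length, principal point at (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv = project_laser_point(np.array([0.0, 0.0, 2.0]),
                         np.eye(3), np.zeros(3), K)
# A point 2 m straight ahead lands on the principal point: (320, 240)
```

In the consistency check, the uncertainty in R and t would be propagated through this mapping so that each projected point carries an image-plane covariance rather than a single pixel location.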