210 results for laser processing


Relevance: 20.00%

Publisher:

Abstract:

YBCO thin films were fabricated by laser deposition, in situ on MgO substrates, using both O₂ and N₂O as the process gas. Films with Tc above 90 K and jc of 10⁶ A/cm² at 77 K were grown in oxygen at a substrate temperature of 765 °C. Using N₂O, the optimum substrate temperature was 745 °C, giving a Tc of 87 K. At lower substrate temperatures, the films made in N₂O had a higher Tc (79 K) than the films made in oxygen (66 K). SEM and STM investigations of the film surfaces showed the films to consist of a comparatively smooth background surface and a distribution of larger particles. Both the particle size and the distribution density depended on the substrate temperature.


PURPOSE: To investigate the utility of non-contact laser-scanning confocal microscopy (NC-LSCM), compared with the more conventional contact laser-scanning confocal microscopy (C-LSCM), for examining corneal substructures in vivo.
METHODS: An attempt was made to capture representative images from the tear film and all layers of the cornea of a healthy 35-year-old female, using both NC-LSCM and C-LSCM, on separate days.
RESULTS: Using NC-LSCM, good-quality images were obtained of the tear film, stroma, and a section of endothelium, but the corneal depth of the images of these various substructures could not be ascertained. Using C-LSCM, good-quality, full-field images were obtained of the epithelium, subbasal nerve plexus, stroma, and endothelium, and the corneal depth of each of the captured images could be ascertained.
CONCLUSIONS: NC-LSCM may find general use for clinical examination of the tear film, stroma and endothelium, with the caveat that the depth of stromal images cannot be determined with this technique. The technique also facilitates image capture of oblique sections of multiple corneal layers. The inability to clearly and consistently image thin corneal substructures - such as the tear film, subbasal nerve plexus and endothelium - is a key limitation of NC-LSCM.


The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions for different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical formulation of the problem together with a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that it offers similar performance on small-scale problems. We also demonstrate its capability on large-scale problems, where it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
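The core of a simulated annealing search over camera configurations can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's actual formulation: the objective is a toy coverage count over 2D targets, and the neighbourhood move perturbs one camera's pose.

```python
import math
import random

def coverage(cameras, targets, fov=60.0, rng=30.0):
    """Toy objective: number of targets seen by at least one camera.
    A camera is (x, y, heading_deg); a target is (x, y)."""
    covered = 0
    for tx, ty in targets:
        for cx, cy, h in cameras:
            dx, dy = tx - cx, ty - cy
            if math.hypot(dx, dy) > rng:
                continue
            ang = math.degrees(math.atan2(dy, dx))
            if abs((ang - h + 180) % 360 - 180) <= fov / 2:
                covered += 1
                break
    return covered

def anneal(targets, n_cams=3, steps=2000, t0=5.0, seed=0):
    rand = random.Random(seed)
    cams = [(rand.uniform(0, 100), rand.uniform(0, 100),
             rand.uniform(0, 360)) for _ in range(n_cams)]
    best_cams, cur = list(cams), coverage(cams, targets)
    best = cur
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9   # linear cooling schedule
        i = rand.randrange(n_cams)
        x, y, h = cams[i]
        cand = list(cams)
        cand[i] = (x + rand.gauss(0, 5), y + rand.gauss(0, 5),
                   (h + rand.gauss(0, 30)) % 360)
        val = coverage(cand, targets)
        # Metropolis acceptance: take improvements, sometimes worse moves
        if val >= cur or rand.random() < math.exp((val - cur) / temp):
            cams, cur = cand, val
            if cur > best:
                best, best_cams = cur, list(cams)
    return best_cams, best
```

A trans-dimensional variant would additionally propose moves that add or remove a camera, which is what lets the search trade network size against coverage.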


The diagnostics of mechanical components operating in transient conditions is still an open issue, in both research and industry. Indeed, the signal processing techniques developed to analyse stationary data are not applicable, or suffer a loss of effectiveness, when applied to signals acquired in transient conditions. In this paper, an original signal processing tool (named EEMED), usable for mechanical component diagnostics at any operating condition and noise level, is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition (EMD) and Minimum Entropy Deconvolution (MED), together with the analytical approach of the Hilbert transform. The proposed tool supplies diagnostic information on the basis of experimental vibrations measured in transient conditions. The tool was originally developed to detect localized faults on bearings installed in the traction equipment of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on spectral kurtosis or envelope analysis, which have until now been the benchmark for bearing diagnostics.
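The Hilbert-transform step such tools share can be sketched as an envelope spectrum: take the magnitude of the analytic signal, then look at its spectrum, where bearing fault frequencies appear as peaks. This is a generic envelope-analysis sketch, not the EEMED algorithm itself; the EMD and MED stages are omitted and the function names are my own.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT-based discrete Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the signal envelope: a periodic fault impact
    modulating a resonance shows up here even when it is buried in the
    raw spectrum."""
    env = np.abs(analytic_signal(x))
    env -= env.mean()                       # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec
```

For example, a 1 kHz resonance amplitude-modulated at a 30 Hz fault rate yields an envelope-spectrum peak at 30 Hz, which is exactly the signature envelope analysis looks for.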


The signal processing techniques developed for the diagnostics of mechanical components operating in stationary conditions are often not applicable, or suffer a loss of effectiveness, when applied to signals measured in transient conditions. In this chapter, an original signal processing tool is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition and Minimum Entropy Deconvolution, together with the analytical approach of the Hilbert transform. The tool was developed to detect localized faults on bearings in the traction systems of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on envelope analysis or spectral kurtosis, which have until now been the benchmark for bearing diagnostics.


Incorporating a learner's level of cognitive processing into learning analytics presents opportunities for obtaining rich data on the learning process. We propose a framework, called COPA, that provides a basis for mapping levels of cognitive operation into a learning analytics system. We utilise Bloom's taxonomy, a theoretically respected conceptualisation of cognitive processing, and apply it in a flexible structure that can be implemented incrementally, and with varying degrees of complexity, within an educational organisation. We outline how the framework is applied, along with its key benefits and limitations. Finally, we apply COPA to a university undergraduate unit and demonstrate its utility in identifying key missing elements in the structure of the course.
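A COPA-style mapping can be pictured as tagging logged learner actions with Bloom levels and then looking for levels a course never exercises. The sketch below is purely hypothetical: the level names follow the revised Bloom's taxonomy, but the action-to-level assignments and function names are illustrative assumptions, not COPA's published schema.

```python
BLOOM_LEVELS = ["remember", "understand", "apply",
                "analyse", "evaluate", "create"]

# Hypothetical assignment of logged activity types to Bloom levels.
ACTION_TO_LEVEL = {
    "view_resource": "remember",
    "summarise_reading": "understand",
    "submit_exercise": "apply",
    "compare_solutions": "analyse",
    "peer_review": "evaluate",
    "upload_project": "create",
}

def cognitive_profile(events):
    """Count how often each Bloom level is exercised in an activity log."""
    profile = {level: 0 for level in BLOOM_LEVELS}
    for action in events:
        level = ACTION_TO_LEVEL.get(action)
        if level:
            profile[level] += 1
    return profile

def missing_levels(profile):
    """Levels never exercised: the 'missing elements' such an analysis
    would flag in a course structure."""
    return [lvl for lvl in BLOOM_LEVELS if profile[lvl] == 0]
```

Run over a whole unit's activity log, `missing_levels` surfaces gaps such as a course that never asks students to evaluate or create.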


Sugar cane processing sites are characterised by high sugar/hemicellulose levels, available moisture and warm conditions, and are relatively unexplored, unique microbial environments. The PhyloChip microarray was used to investigate bacterial diversity and community composition in three Australian sugar cane processing plants. These ecosystems were highly complex and dominated by four main phyla: Firmicutes (the most dominant), followed by Proteobacteria, Bacteroidetes, and Chloroflexi. Significant variation (p < 0.05) in community structure occurred between samples collected from 'floor dump sediment', 'cooling tower water', and 'bagasse leachate'. Many bacterial classes contributed to these differences; however, most were of low numerical abundance. Separation in community composition was also linked to classes of Firmicutes, particularly Bacillales, Lactobacillales and Clostridiales, whose dominance is likely linked to their physiology as 'lactic acid bacteria', capable of fermenting the sugars present. This process may help displace other bacterial taxa, providing a competitive advantage for Firmicutes.


This paper presents large, accurately calibrated and time-synchronised datasets, gathered outdoors in controlled environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. It discusses how the data collection process was designed, the conditions in which these datasets were gathered, and some possible outcomes of their exploitation, in particular for evaluating the performance of sensors and perception algorithms for UGVs.


This document describes large, accurately calibrated and time-synchronised datasets, gathered in controlled environmental conditions, using an unmanned ground vehicle equipped with a wide variety of sensors. These sensors include: multiple laser scanners, a millimetre-wave radar scanner, a colour camera and an infra-red camera. Full details of the sensors are given, as well as the calibration parameters needed to locate them with respect to each other and to the platform. This report also specifies the format and content of the data, and the conditions in which the data were gathered. The data collection covered two situations of the vehicle: static and dynamic. The static tests consisted of sensing a fixed 'reference' terrain, containing simple known objects, from a motionless vehicle. For the dynamic tests, data were acquired from a moving vehicle in various environments, mainly rural, including an open area, a semi-urban zone and a natural area with different types of vegetation. For both categories, data were gathered in controlled environmental conditions, which included the presence of dust, smoke and rain. Most of the environments involved were static, except for a few specific datasets which involve the presence of a walking pedestrian. Finally, this document presents illustrations of the effects of adverse environmental conditions on sensor data, as a first step towards reliability and integrity in autonomous perceptual systems.


Operating in vegetated environments is a major challenge for autonomous robots. Obstacle detection based only on geometric features causes the robot to treat foliage, for example small grass tussocks that could easily be driven through, as obstacles. Classifying vegetation does not solve this problem, since there might be an obstacle hidden behind the vegetation; in addition, dense vegetation typically needs to be considered an obstacle. This paper addresses this problem by augmenting a probabilistic traversability map, constructed from laser data, with ultra-wideband radar measurements. An adaptive detection threshold and a probabilistic sensor model are developed to convert the radar data into occupancy probabilities. The resulting map retains the fine resolution of the laser map but clears from the traversability map those areas induced by obstacle-free foliage. Experimental results validate that this method improves the accuracy of traversability maps in vegetated environments.
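The radar-to-occupancy conversion can be pictured as a log-odds fusion: map each radar return, relative to a detection threshold, to an occupancy probability, then add its log-odds to the laser cell. The logistic sensor model, its parameters and the function names below are illustrative assumptions, not the paper's calibrated model.

```python
import math

def radar_occupancy_prob(power_db, threshold_db, scale=3.0):
    """Hypothetical inverse sensor model: a logistic curve maps return
    power relative to the detection threshold to occupancy probability."""
    return 1.0 / (1.0 + math.exp(-(power_db - threshold_db) / scale))

def fuse_cell(l_laser, power_db, threshold_db):
    """Fuse a laser traversability cell (log-odds) with a radar reading.
    A strong radar return reinforces 'obstacle'; a weak one (foliage the
    radar penetrates) pushes the cell back towards traversable."""
    p = radar_occupancy_prob(power_db, threshold_db)
    l_radar = math.log(p / (1.0 - p))
    return l_laser + l_radar

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

With this model, a grass tussock the laser flags as an obstacle but the radar sees through ends up with a lowered occupancy probability, which is exactly the clearing behaviour described above.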


Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seems to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
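The mapping described above is, at its core, a chain of homogeneous transforms: an extrinsic calibration takes a range measurement from the sensor frame to the body frame, and the vehicle pose takes it on to the navigation frame. A minimal sketch follows; the frame names, the Z-Y-X Euler convention and the function names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def transform(roll, pitch, yaw, t):
    """4x4 homogeneous transform from Z-Y-X Euler angles and a translation."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
                  [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
                  [-sp,     cp * sr,                cp * cr]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def map_point(p_sensor, T_nav_body, T_body_sensor):
    """Chain the body pose and the extrinsic calibration to map a range
    measurement from the sensor frame into the navigation frame."""
    p = np.append(p_sensor, 1.0)
    return (T_nav_body @ T_body_sensor @ p)[:3]
```

Errors in either transform (a biased extrinsic calibration, or a timestamp offset in the pose) propagate directly into the mapped point, which is why the paper's error model and calibration procedure matter.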


In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV), equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. It provides a full description of the system used for data collection and the types of environments and conditions in which these data sets have been gathered, which include the presence of airborne dust, smoke and rain.
