926 results for Characterization techniques
Abstract:
This paper presents a comprehensive discussion of vegetation management approaches in power line corridors based on aerial remote sensing techniques. We address three issues: 1) strategies for risk management in power line corridors, 2) selection of suitable platforms and sensor suites for data collection, and 3) progress in automated data processing techniques for vegetation management. We present initial results from a series of experiments, along with challenges and lessons learnt from our project.
Abstract:
Purpose: To date, there have been no measuring techniques available that could clearly identify all phases of tear film surface kinetics in one interblink interval. Methods: Using a series of cases, we show that lateral shearing interferometry equipped with a set of robust parameter estimation techniques is able to characterize up to five different phases of tear film surface kinetics: (i) initial fast tear film build-up phase, (ii) further slower tear film build-up phase, (iii) tear film stability, (iv) tear film thinning, and (v) after a detected break-up, subsequent tear film deterioration. Results: Several representative examples are given for estimating tear film surface kinetics in measurements in which the subjects were asked to blink and keep their eyes open as long as they could. Conclusions: Lateral shearing interferometry is a noninvasive technique that provides a means for temporal characterization of tear film surface kinetics and the opportunity for the analysis of the two-step tear film build-up process.
Abstract:
This paper presents a multiscale study using the coupled Meshless technique/Molecular Dynamics (M2) method for exploring the deformation mechanism of mono-crystalline metal (with a focus on copper) under uniaxial tension. In M2, an advanced transition algorithm using transition particles is employed to ensure the compatibility of both displacements and their gradients, and an effective local quasi-continuum approach is also applied to obtain the equivalent continuum strain energy density based on the atomistic potentials and the Cauchy-Born rule. The key parameters used in M2 are first investigated using a benchmark problem. Then M2 is applied to the multiscale simulation of a mono-crystalline copper bar. It is found that mono-crystalline copper has very good elongation properties, and that its ultimate strength and Young's modulus are much higher than those obtained at the macro-scale.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
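The matching metrics compared above can be sketched in a few lines. The following is a minimal illustration (not the report's implementation) of the Sum of Absolute Differences, its zero-mean variant, and the census transform with a Hamming-distance matching cost, assuming small single-channel image patches:

```python
import numpy as np

def sad(a, b):
    # Sum of Absolute Differences: cheapest metric, but sensitive
    # to radiometric distortion between the two images
    return np.abs(a.astype(float) - b.astype(float)).sum()

def zsad(a, b):
    # Zero-mean SAD: subtracting each patch's mean gives robustness
    # to uniform brightness offsets at a small extra cost
    a = a.astype(float)
    b = b.astype(float)
    return np.abs((a - a.mean()) - (b - b.mean())).sum()

def census(patch):
    # Census transform: a bit string of comparisons of every pixel
    # against the centre pixel of the patch
    centre = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return (patch > centre).flatten()

def census_cost(a, b):
    # Matching cost is the Hamming distance between census bit strings
    return int(np.count_nonzero(census(a) != census(b)))
```

Because the census transform encodes only the ordering of intensities relative to the centre pixel, a uniform brightness change leaves the bit string, and hence the Hamming cost, unchanged; this is why rank/census methods tolerate radiometric distortion that inflates plain SAD.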
Abstract:
Vector field visualisation is one of the classic sub-fields of scientific data visualisation. The need for effective visualisation of flow data arises in many scientific domains, ranging from medical sciences to aerodynamics. Though there has been much research on the topic, the question of how to communicate flow information effectively in real, practical situations is still largely an unsolved problem. This is particularly true for complex 3D flows. In this presentation we give a brief introduction and background to vector field visualisation and comment on the effectiveness of the most common solutions. We will then give some examples of current development on texture-based techniques, and give practical examples of their use in CFD research and hydrodynamic applications.
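As a concrete flavour of what texture-based flow visualisation builds on, the sketch below traces a streamline through a sampled 2-D vector field with forward-Euler steps; streamline tracing of this kind is the core building block of techniques such as line integral convolution (LIC). The grid layout and nearest-sample lookup here are illustrative assumptions, not taken from the presentation:

```python
import numpy as np

def trace_streamline(vx, vy, start, steps=100, h=0.5):
    # Forward-Euler integration of a particle path through a vector
    # field sampled on a regular grid (vx, vy index as [row, col]).
    path = [np.asarray(start, dtype=float)]  # start = (x, y)
    for _ in range(steps):
        p = path[-1]
        i, j = int(np.rint(p[1])), int(np.rint(p[0]))  # nearest sample
        if not (0 <= i < vx.shape[0] and 0 <= j < vx.shape[1]):
            break  # left the field
        v = np.array([vx[i, j], vy[i, j]])
        norm = np.linalg.norm(v)
        if norm < 1e-12:
            break  # stagnation point
        path.append(p + h * v / norm)  # unit-speed step along the flow
    return np.array(path)
```

In a texture-based method, a noise texture would then be convolved along many such paths to produce a dense depiction of the flow.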
Abstract:
Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as a quantifier of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method, and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on images obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals from a height of 80 cm above the ground. The results obtained from image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R2) exceeding 0.8 are obtained when up to 10% of outliers are removed.
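To illustrate the autocorrelation approach (a minimal sketch, not the paper's exact method), the normalised autocorrelation of a 1-D surface profile can be computed efficiently via the Wiener-Khinchin theorem, and its first off-origin peak recovers the dominant texture period:

```python
import numpy as np

def autocorrelation(profile):
    # Normalised autocorrelation of a 1-D profile via the
    # Wiener-Khinchin theorem: inverse FFT of the power spectrum.
    x = profile - profile.mean()
    n = len(x)
    spectrum = np.fft.rfft(x, n=2 * n)  # zero-pad to avoid wrap-around
    acf = np.fft.irfft(np.abs(spectrum) ** 2)[:n]
    return acf / acf[0]  # acf[0] is the signal energy

# Illustrative use on a synthetic "texture" with a 20-sample period
profile = np.sin(2 * np.pi * np.arange(200) / 20.0)
acf = autocorrelation(profile)
# The first peak away from zero lag gives the dominant texture spacing
period = int(np.argmax(acf[5:50]) + 5)
```

On a real pavement image, the same idea would be applied to intensity profiles extracted from the image, with the recovered spacing acting as a proxy for texture coarseness.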
Abstract:
Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone in the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: Wavelets; Gabor / Log-Gabor; and the Discrete Cosine Transform. Experimentation illustrates that frequency domain partitioning prior to dimensionality reduction increases the information available for classification and greatly increases face recognition performance for both eigen-face and fisher-face approaches.
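A minimal sketch of the idea of frequency-domain partitioning before dimensionality reduction, assuming a DCT-based decomposition and square images (illustrative only, not the paper's pipeline): keep the low-frequency DCT coefficients as features, then build an eigen-subspace by PCA:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis functions)
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def low_freq_features(img, keep=8):
    # 2-D DCT of a square image; retain the top-left (low-frequency)
    # keep-by-keep block of coefficients as the feature vector
    d = dct_matrix(img.shape[0])
    coeffs = d @ img @ d.T
    return coeffs[:keep, :keep].flatten()

def eigen_subspace(features, n_components):
    # PCA via SVD of mean-centred feature vectors (one row per image)
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, axes):
    # Coordinates of a feature vector in the eigen-subspace
    return (x - mean) @ axes.T
```

A fisher-face variant would replace the final PCA step with linear discriminant analysis on the same frequency-partitioned features.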
Abstract:
In order to achieve meaningful reductions in individual ecological footprints, individuals must dramatically alter their day-to-day behaviours. Effective interventions will need to be evidence-based, and there is a necessity for the rapid transfer, or communication, of information from the point of research into policy and practice. A number of health disciplines, including psychology and public health, share a common mission to promote health and well-being, and it is becoming clear that the most practical pathway to achieving this mission is through interdisciplinary collaboration. This paper argues that an interdisciplinary collaborative approach will facilitate research that results in the rapid transfer of findings into policy and practice. The application of this approach is described in relation to the Green Living project, which explored the psycho-social predictors of environmentally friendly behaviour. Following a qualitative pilot study, and in consultation with an expert panel comprising academics, industry professionals and government representatives, a self-administered mail survey was distributed to a random sample of 3000 residents of Brisbane and Moreton Bay (Queensland, Australia). The Green Living survey explored specific beliefs, including attitudes, norms, perceived control, intention and behaviour, as well as a number of other constructs such as environmental concern and altruism. This research has two beneficial outcomes. First, it will inform a practical model for predicting sustainable living behaviours, and a number of local councils have already expressed an interest in making use of the results as part of their ongoing community engagement programs. Second, it provides an example of how a collaborative interdisciplinary project can provide a more comprehensive approach to research than can be accomplished by a single-discipline project.
Abstract:
For a biomaterial to be considered suitable for bone repair it should ideally be both bioactive and have a capacity for controllable drug delivery; as such, mesoporous SiO2 glass has been proposed as a new class of bone regeneration material by virtue of its high drug-loading ability and generally good biocompatibility. It does, however, have less than optimum bioactivity and controllable drug delivery properties. In this study, we incorporated strontium (Sr) into mesoporous SiO2 in an effort to develop a bioactive mesoporous SrO–SiO2 (Sr–Si) glass with the capacity to deliver Sr2+ ions, as well as a drug, at a controlled rate, thereby producing a material better suited for bone repair. The effects of Sr2+ on the structure, physicochemistry, drug delivery and biological properties of mesoporous Sr–Si glass were investigated. The prepared mesoporous Sr–Si glass was found to have an excellent release profile of bioactive Sr2+ ions and dexamethasone, and the incorporation of Sr2+ improved structural properties, such as mesopore size, pore volume and specific surface area, as well as rate of dissolution and protein adsorption. The mesoporous Sr–Si glass had no cytotoxic effects, and its release of Sr2+ and SiO44− ions enhanced alkaline phosphatase activity – a marker of osteogenic cell differentiation – in human bone mesenchymal stem cells. Mesoporous Sr–Si glasses can also be prepared as porous scaffolds, which show a more sustained drug release. This study suggests that incorporating Sr2+ into mesoporous SiO2 glass produces a material with a more optimal drug delivery profile coupled with improved bioactivity, making it an excellent material for bone repair applications. Keywords: Mesoporous Sr–Si glass; Drug delivery; Bioactivity; Bone repair; Scaffolds
Abstract:
Understanding the motion characteristics of on-site objects is desirable for the analysis of construction work zones, especially in problems related to safety and productivity studies. This article presents a methodology for rapid object identification and tracking. The proposed methodology contains algorithms for spatial modeling and image matching. A high-frame-rate range sensor was utilized for spatial data acquisition. The experimental results indicated that an occupancy grid spatial modeling algorithm could quickly build a suitable work zone model from the acquired data. The results also showed that an image matching algorithm is able to find the most similar object from a model database and from spatial models obtained from previous scans. It is then possible to use the matched information to successfully identify and track objects.
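An occupancy grid of the kind used for spatial modelling can be built very simply from range-sensor returns. The following sketch (cell size and extent are assumed values, not taken from the article) marks each grid cell containing at least one 2-D point as occupied:

```python
import numpy as np

def occupancy_grid(points, cell=0.5, extent=10.0):
    # Build a boolean occupancy grid covering [-extent, extent) in x and y.
    # A cell is marked occupied if any sensor return falls inside it.
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    idx = np.floor((points + extent) / cell).astype(int)  # point -> cell index
    valid = ((idx >= 0) & (idx < n)).all(axis=1)          # drop out-of-range hits
    grid[idx[valid, 0], idx[valid, 1]] = True
    return grid
```

With a high-frame-rate range sensor, a grid like this can be rebuilt every scan, and object identification then reduces to matching connected occupied regions against a model database.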
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and the predictive results are compared to those of previous efforts, being found superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models, the parameters of which can be set according to trade-off curves to obtain a particular performance trade-off.
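Of the two learners applied, Naive Bayes is the simpler to sketch. The toy implementation below (an illustration of the technique, not the thesis code) fits one Gaussian per class per software metric and classifies a module by maximum log posterior:

```python
import numpy as np

class GaussianNB:
    # Gaussian Naive Bayes: each feature is modelled independently
    # per class; prediction picks the class with the highest posterior.
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.stats, self.priors = {}, {}
        for c in self.classes:
            Xc = X[y == c]
            # per-feature mean/variance; small floor avoids zero variance
            self.stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
            self.priors[c] = len(Xc) / len(X)
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            mu, var = self.stats[c]
            # Gaussian log-likelihood summed over independent features
            ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            scores.append(ll.sum(axis=1) + np.log(self.priors[c]))
        return self.classes[np.argmax(scores, axis=0)]
```

In the fault-prediction setting, the rows of X would be module metric vectors and y the fault-prone / not fault-prone labels; feature selection would be applied to X before fitting.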