970 results for general edge model


Relevance:

40.00%

Publisher:

Abstract:

The edge-to-edge matching model, which was originally developed for predicting crystallographic features in diffusional phase transformations in solids, has been used to understand the formation of in-plane textures in TiSi2 (C49) thin films on the Si single-crystal (001) surface. The model predicts all four previously reported orientation relationships between C49 and the Si substrate based on the actual atom matching across the interface and the basic crystallographic data only. The model has strong potential to be used to develop new thin film materials. (c) 2006 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
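At its core, the edge-to-edge matching model reduces to a spacing-misfit screen between the two phases. The Python sketch below illustrates that screen with made-up spacing values and the commonly quoted tolerances of roughly 10% interatomic-spacing misfit along matching rows and 6% interplanar (d-value) mismatch between matching planes; it is an illustration of the idea, not the published TiSi2/Si calculation.

```python
# Sketch of the spacing-misfit screening at the core of edge-to-edge matching.
# The spacing values below are illustrative placeholders, not measured data;
# the ~10% (row) and ~6% (plane) tolerances are the commonly quoted criteria.

def misfit(a: float, b: float) -> float:
    """Relative misfit between two spacings, as a fraction of the smaller one."""
    return abs(a - b) / min(a, b)

# Hypothetical interatomic spacings (nm) along close-packed rows of each phase.
rows_phase_1 = {"row_A": 0.272, "row_B": 0.314}
rows_phase_2 = {"row_X": 0.276, "row_Y": 0.352}

# Hypothetical interplanar spacings (nm) of close-packed planes containing those rows.
planes_phase_1 = {"plane_A": 0.235, "plane_B": 0.209}
planes_phase_2 = {"plane_X": 0.228, "plane_Y": 0.192}

ROW_TOL, PLANE_TOL = 0.10, 0.06   # commonly quoted misfit tolerances

row_matches = [(r1, r2, round(misfit(d1, d2), 3))
               for r1, d1 in rows_phase_1.items()
               for r2, d2 in rows_phase_2.items()
               if misfit(d1, d2) < ROW_TOL]

plane_matches = [(p1, p2, round(misfit(d1, d2), 3))
                 for p1, d1 in planes_phase_1.items()
                 for p2, d2 in planes_phase_2.items()
                 if misfit(d1, d2) < PLANE_TOL]

print("candidate matching rows:  ", row_matches)
print("candidate matching planes:", plane_matches)
```

Pairs passing both screens are the candidates from which orientation relationships are then constructed using the full crystallographic data.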

Relevance:

40.00%

Publisher:

Abstract:

The orientation relationship (OR) between the β(Zn) phase and the α(Al) phase and the corresponding habit planes in a Zn-Al eutectoid alloy were accurately determined using convergent-beam Kikuchi-line diffraction patterns. In addition to the previously reported OR, [112̄0]β ∥ [110]α, (0002)β ∥ (1̄11)α, two new ORs were observed: [112̄0]β ∥ [110]α with (1̄101)β 0.82° from (002)α, and [1̄100]β ∥ [112]α with (0002)β 4.5° from (111)α. These ORs can be explained and understood using the recently developed edge-to-edge matching model. (c) 2006 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

Feature detection is a crucial stage of visual processing. In previous feature-marking experiments we found that peaks in the 3rd derivative of the luminance profile can signify edges where there are no 1st-derivative peaks or 2nd-derivative zero-crossings (Wallis and Georgeson). 'Mach edges' (the edges of Mach bands) were nicely predicted by a new nonlinear model based on 3rd-derivative filtering. As a critical test of the model, we now use a new class of stimuli, formed by adding a linear luminance ramp to the blurred triangle waves used previously. The ramp has no effect on the second or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing only one edge as the added ramp gradient increases. In experiment 1, subjects judged whether one or two edges were visible on each trial. In experiment 2, subjects used a cursor to mark perceived edges and bars. The position and polarity of the marked edges were close to model predictions. Both experiments produced the predicted shift from two to one Mach edge, but the shift was less complete than predicted. We conclude that the model is a useful predictor of edge perception, but needs some modification.
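The logic of the ramp manipulation can be checked numerically. In the sketch below (all waveform parameters are arbitrary illustrative choices, not the experimental values), adding a linear ramp leaves a purely linear 3rd-derivative operator essentially unchanged, whereas a pathway that half-wave rectifies the 1st derivative before differentiating twice more does respond to the ramp — the property that drives the predicted two-edges-to-one shift.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Arbitrary illustrative stimulus: a blurred triangle wave, with and without a linear ramp.
x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
triangle = 2.0 * np.abs(x % 1.0 - 0.5) - 0.5          # period-1 triangle wave
lum = gaussian_filter1d(triangle, sigma=0.05 / dx)    # blurred luminance profile
ramp = 0.3 * x                                        # added linear luminance ramp

def third_derivative(signal):
    d1 = np.gradient(signal, dx)
    d2 = np.gradient(d1, dx)
    return np.gradient(d2, dx)

def rectified_third_derivative(signal):
    d1 = np.maximum(np.gradient(signal, dx), 0.0)     # half-wave rectified 1st derivative
    d2 = np.gradient(d1, dx)
    return np.gradient(d2, dx)

lin = third_derivative(lum + ramp) - third_derivative(lum)
rect = rectified_third_derivative(lum + ramp) - rectified_third_derivative(lum)
interior = slice(50, -50)                             # ignore boundary effects of np.gradient

print(f"linear 3rd-derivative change:  {np.max(np.abs(lin[interior])):.2e}")   # ~ rounding noise
print(f"rectified-pathway change:      {np.max(np.abs(rect[interior])):.2e}")  # clearly non-zero
```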

Relevance:

40.00%

Publisher:

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
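A compact sketch of the rectify-then-differentiate scale-space idea described above (the σ-power used for scale normalization and the stimulus parameters are placeholder assumptions, not the calibrated values of the published model): take the half-wave-rectified 1st derivative at each filter scale, differentiate twice more, and read the edge's position and scale from the peak of the inverted, scale-normalized 3rd derivative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def edge_scale_space(lum, dx, sigmas, norm_power=2.0):
    """Rectify-then-differentiate scale-space responses.

    For each filter scale: Gaussian 1st derivative, half-wave rectification,
    two further differentiations, then sign inversion and a sigma**norm_power
    scale normalization (the exponent is a placeholder, not the published
    model's calibrated normalization)."""
    responses = []
    for s in sigmas:
        d1 = gaussian_filter1d(lum, sigma=s / dx, order=1) / dx
        d1 = np.maximum(d1, 0.0)                       # half-wave rectification
        d3 = np.gradient(np.gradient(d1, dx), dx)      # 3rd derivative of rectified signal
        responses.append(-(s ** norm_power) * d3)      # inverted, scale-normalized
    return np.array(responses)                         # shape (n_scales, n_samples)

# Illustrative Gaussian-integral edge: blur 0.08, contrast 0.5 (arbitrary values).
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
lum = 0.5 + 0.25 * erf(x / (0.08 * np.sqrt(2.0)))

sigmas = np.geomspace(0.02, 0.4, 40)
R = edge_scale_space(lum, dx, sigmas)
scale_idx, pos_idx = np.unravel_index(int(np.argmax(R)), R.shape)
print(f"edge found at x = {x[pos_idx]:+.3f}, peak filter scale = {sigmas[scale_idx]:.3f}")
```

For this rising Gaussian edge the rectifier is inactive, so the peak location recovers the edge position; the paper's smoothed-threshold modification matters precisely where gradients are shallow enough for the rectifier (or threshold) to bite.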

Relevance:

40.00%

Publisher:

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We proposed a model of this motion sharpening that invokes a local, nonlinear contrast transducer function (Hammett et al., 1998, Vision Research, 38, 2099-2108). Response saturation in the transducer compresses or 'clips' the input spatial waveform, rendering the edges sharper. To explain the increasing distortion of drifting edges at higher speeds, the degree of nonlinearity must increase with speed or temporal frequency. A dynamic contrast gain control before the transducer can account for both the speed dependence and the approximate contrast invariance of motion sharpening (Hammett et al., 2003, Vision Research, in press). We show here that this model also predicts perceived sharpening of briefly flashed and flickering edges, and that it can account fairly well for experimental data from all three modes of presentation (motion, flash, and flicker). At moderate durations and lower temporal frequencies the gain control attenuates the input signal, thus protecting it from later compression by the transducer. The gain control is somewhat sluggish, and so it suffers both a slow onset and a loss of power at high temporal frequencies. Consequently, brief presentations and high temporal frequencies of drift and flicker are less protected from distortion, and show greater perceptual sharpening.
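The clipping idea is easy to illustrate. The sketch below passes a blurred edge's contrast profile through a simple compressive transducer (the functional form and semi-saturation constant are illustrative assumptions, not the fitted model, and the dynamic gain control is omitted); the output's peak gradient relative to its amplitude increases, i.e. the edge is rendered sharper.

```python
import numpy as np
from scipy.special import erf

# Illustrative blurred edge: contrast profile spanning -0.4 to +0.4 (arbitrary values).
x = np.linspace(-1.0, 1.0, 2001)
c = 0.4 * erf(x / (0.2 * np.sqrt(2.0)))

def transducer(c, semi_saturation=0.1):
    """Simple compressive (saturating) contrast transducer -- an illustrative
    stand-in for the model's local nonlinearity, not its fitted form."""
    return np.sign(c) * np.abs(c) / (np.abs(c) + semi_saturation)

def relative_peak_gradient(y, x):
    """Peak spatial gradient normalized by overall amplitude: a crude sharpness index."""
    return np.max(np.gradient(y, x)) / (y.max() - y.min())

print(f"input  sharpness index: {relative_peak_gradient(c, x):.2f}")
print(f"output sharpness index: {relative_peak_gradient(transducer(c), x):.2f}")
```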

Relevance:

40.00%

Publisher:

Abstract:

We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
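A minimal sketch of the signature-plus-template idea for a Gaussian edge (the operator scale, template bank and normalized-correlation fit are illustrative choices rather than the paper's calibrated implementation). The 2nd-derivative signature of a Gaussian edge of blur b, taken at operator scale σf, is a Gaussian 1st derivative of scale sqrt(b² + σf²), so the scale of the best-fitting template recovers the blur once the operator scale is removed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]

true_blur = 0.06                                   # illustrative edge blur
lum = 0.5 + 0.5 * erf(x / (true_blur * np.sqrt(2.0)))

# 1) "Signature": Gaussian 2nd derivative of the luminance profile.
sigma_f = 0.03                                     # illustrative operator scale
signature = gaussian_filter1d(lum, sigma=sigma_f / dx, order=2)

def g1_template(x, sigma):
    """Gaussian 1st-derivative template, normalized to unit energy."""
    t = -x * np.exp(-x**2 / (2.0 * sigma**2))
    return t / np.linalg.norm(t)

# 2) Find the best-fitting template by normalized correlation.
template_scales = np.geomspace(0.01, 0.5, 200)
sig_unit = signature / np.linalg.norm(signature)
fits = [abs(np.dot(sig_unit, g1_template(x, s))) for s in template_scales]
best = template_scales[int(np.argmax(fits))]

# 3) For a Gaussian edge the signature scale is sqrt(blur^2 + sigma_f^2),
#    so the blur estimate follows by removing the operator scale.
blur_estimate = np.sqrt(max(best**2 - sigma_f**2, 0.0))
print(f"true blur {true_blur:.3f}, estimated blur {blur_estimate:.3f}")
```

Implementing the 2nd-derivative operator as two 1st-derivative stages separated by a half-wave rectifier, as the abstract describes, is what confines each estimate to a region of consistent gradient sign; that segmentation step is not reproduced in this single-edge sketch.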

Relevance:

30.00%

Publisher:

Abstract:

The RatSLAM system can perform vision-based SLAM using a computational model of the rodent hippocampus. When the number of pose cells used to represent space in RatSLAM is reduced, artifacts are introduced that hinder its use for goal-directed navigation. This paper describes a new component for the RatSLAM system called an experience map, which provides a coherent representation for goal-directed navigation. Results are presented for two sets of real-world experiments, including a comparison with the original goal memory system's performance in the same environment. Preliminary results are also presented demonstrating the ability of the experience map to adapt to simple short-term changes in the environment.

Relevance:

30.00%

Publisher:

Abstract:

Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port-specific stormwater model for the Fisherman Islands facility. The need has to be considered in the context of the proposed future developments of the Port area.

The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research to assist the Port in strengthening the environmental custodianship of the Port area through 'cutting edge' research and its translation into practical application. The project was separated into two stages. The first stage developed a quantitative understanding of the generation potential of pollutant loads in the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet-to-be-developed port expansion area, in order to predict pollutant loads associated with stormwater flows from this area, with the longer term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios.

Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. This uniqueness in land use results in distinctive stormwater quality characteristics different to other conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or to consider it as being similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies, where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration processes were involved, due to the use of measured parameters for build-up and wash-off.

Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce the runoff volume generated as well as the frequency of runoff events. Apart from initial losses, most of the other parameters used in SWMM modelling are generic to most modelling studies. Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the given investigation period does not fit a normal distribution (a minimal distribution check of this kind is sketched after the recommendations below). This is possibly because only one specific location, namely the Port of Brisbane, was considered, unlike in the case of the MUSIC model where a range of areas with different geographic and climatic conditions was investigated. Consequently, the assumptions used in MUSIC are not totally applicable for the analysis of water quality in Port land uses. Therefore, in using the parameters included in this report for MUSIC modelling, it is important to note that it may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6.

Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken:
* It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess 'before' and 'after' scenarios.
* In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
* The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 µm particle size range was predominant in suspended solids for pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 µm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
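Referring back to the point about EMCs not fitting a normal distribution: a quick way to check such an assumption is to test both the raw and log-transformed values. The sketch below uses hypothetical placeholder concentrations, not data from this study, and is only meant to illustrate the kind of check behind that conclusion.

```python
import numpy as np
from scipy import stats

# Hypothetical event mean concentrations (EMCs, mg/L) for one land use --
# placeholder values only, not measurements from the Port of Brisbane study.
emc = np.array([12.0, 35.0, 8.5, 60.0, 22.0, 15.5, 95.0, 18.0, 41.0, 9.0])

# Shapiro-Wilk test on raw and log-transformed values: a low p-value on the raw
# data but not on the log data would suggest a lognormal description fits better
# than a normal distribution.
for label, sample in [("raw EMC", emc), ("log EMC", np.log(emc))]:
    w, p = stats.shapiro(sample)
    print(f"{label:8s}: W = {w:.3f}, p = {p:.3f}")

# Summary statistics of the kind the report discusses supplying for MUSIC modelling.
print(f"mean = {emc.mean():.1f} mg/L, sd = {emc.std(ddof=1):.1f} mg/L")
```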

Relevance:

30.00%

Publisher:

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage (a simple stand-in for this kind of filter is sketched below), and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a minimax optimisation problem based on a joint RER cost criterion.
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operation conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
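The abstract does not name the two candidate morphological filters, so the sketch below uses a common stand-in for this kind of spatial pre-processing, a grey-scale close-minus-open (CMO) filter, applied to a synthetic frame; the HMM-based track-before-detect temporal stage is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def cmo_enhance(frame: np.ndarray, size: int = 5) -> np.ndarray:
    """Close-minus-open (CMO) morphological enhancement: suppresses smooth
    background structure while keeping features smaller than the window."""
    footprint = np.ones((size, size))
    return (ndimage.grey_closing(frame, footprint=footprint)
            - ndimage.grey_opening(frame, footprint=footprint))

# Synthetic frame: smooth sky-to-ground gradient + large-scale clutter + noise,
# plus a small dim target that sits well below the brightest parts of the raw image.
rng = np.random.default_rng(0)
yy = np.arange(120)[:, None]
background = 0.5 + 0.3 * (yy / 120.0)
clutter = ndimage.gaussian_filter(rng.standard_normal((120, 160)), sigma=12)
clutter = 0.2 * clutter / clutter.std()
frame = background + clutter + 0.015 * rng.standard_normal((120, 160))
frame[60, 80] += 0.15                                  # the dim target

enhanced = cmo_enhance(frame)
print("raw argmax:     ", np.unravel_index(int(np.argmax(frame)), frame.shape))
print("enhanced argmax:", np.unravel_index(int(np.argmax(enhanced)), enhanced.shape))
# The enhanced maximum should land on (or immediately adjacent to) the target at (60, 80).
```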

Relevance:

30.00%

Publisher:

Abstract:

Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
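To make the forward-recursion idea concrete, the toy sketch below computes the exact normalizing constant of a first-order autologistic model on a small regular lattice by recursing over rows, with a brute-force check. The paper's actual contribution, extending the reduced dependence approximation to irregular boundaries and missing-data regions, is not reproduced; the parameter values are arbitrary.

```python
import itertools
import numpy as np
from scipy.special import logsumexp

def autologistic_log_Z(alpha, beta, n_rows, n_cols):
    """Exact log normalizing constant of a first-order autologistic model
    (y_i in {0, 1}) on a small regular n_rows x n_cols lattice, by forward
    recursion over rows. The state space per row is 2**n_cols, so this is
    feasible only for narrow lattices; the reduced dependence approximation
    combines recursions on sublattices to handle larger ones."""
    rows = np.array(list(itertools.product([0, 1], repeat=n_cols)))
    # Within-row terms: external field plus horizontal neighbour interactions.
    within = alpha * rows.sum(axis=1) + beta * (rows[:, :-1] * rows[:, 1:]).sum(axis=1)
    # Between-row interaction energy for every ordered pair of row states.
    vertical = beta * (rows @ rows.T)
    log_f = within.astype(float)
    for _ in range(n_rows - 1):
        log_f = within + logsumexp(log_f[:, None] + vertical, axis=0)
    return logsumexp(log_f)

def brute_force_log_Z(alpha, beta, n_rows, n_cols):
    """Direct enumeration, for checking the recursion on a tiny lattice."""
    log_terms = []
    for flat in itertools.product([0, 1], repeat=n_rows * n_cols):
        y = np.array(flat).reshape(n_rows, n_cols)
        e = alpha * y.sum() + beta * ((y[:, :-1] * y[:, 1:]).sum()
                                      + (y[:-1, :] * y[1:, :]).sum())
        log_terms.append(e)
    return logsumexp(np.array(log_terms, dtype=float))

print(autologistic_log_Z(-0.2, 0.4, 3, 4))   # forward recursion over rows
print(brute_force_log_Z(-0.2, 0.4, 3, 4))    # should agree to machine precision
```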

Relevance:

30.00%

Publisher:

Abstract:

Gaining a competitive edge in the engagement, success and retention of commencing students is a significant issue in higher education, made more pressing by the considerable and increasing pressure on teaching and learning from the new standards framework and performance funding. This paper introduces the concept of maturity models (MMs) and their application to assessing the capability of higher education institutions (HEIs) to address student engagement, success and retention (SESR). A concise description of the features of maturity models is presented with reference to an SESR-MM currently being developed. The SESR-MM is proposed as a viable instrument for assisting HEIs in the management and improvement of their SESR activities.

Relevance:

30.00%

Publisher:

Abstract:

The railhead is severely stressed under the localized wheel contact patch close to the gaps in insulated rail joints. A modified railhead profile in the vicinity of the gapped joint, obtained through a shape optimization model based on a coupled genetic algorithm and finite element method, effectively alters the contact zone and significantly reduces the railhead edge stress concentration. Two optimization methods, a grid search method and a genetic algorithm, were employed for this optimization problem. The optimal results from these two methods are discussed and, in particular, their suitability for the rail end stress minimization problem is studied. Through several numerical examples, the optimal profile is shown to be unaffected by either the magnitude or the contact position of the loaded wheel. The numerical results are validated through a large-scale experimental study.
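As a structural sketch of how such a coupled scheme is typically wired (not the study's actual implementation), the skeleton below runs a small genetic algorithm over a vector of hypothetical profile parameters and calls a placeholder `evaluate_peak_stress` function where the finite element contact analysis would sit.

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate_peak_stress(profile: np.ndarray) -> float:
    """Stand-in for the FEM contact analysis: returns a surrogate peak railhead
    edge stress for a candidate profile. A made-up smooth function keeps the
    skeleton runnable; in a coupled scheme this call launches an FE model."""
    target = np.linspace(0.0, 1.0, profile.size) ** 2     # arbitrary 'good' shape
    return 1.0 + np.sum((profile - target) ** 2)

def genetic_search(n_vars=6, pop_size=40, n_gen=60, mut_sd=0.05):
    pop = rng.uniform(0.0, 1.0, size=(pop_size, n_vars))  # candidate profiles
    for _ in range(n_gen):
        fitness = np.array([evaluate_peak_stress(p) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        # Uniform crossover between random parent pairs, plus Gaussian mutation.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, n_vars)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        pop = children + rng.normal(0.0, mut_sd, size=children.shape)
    fitness = np.array([evaluate_peak_stress(p) for p in pop])
    return pop[np.argmin(fitness)], fitness.min()

best_profile, best_stress = genetic_search()
print("best candidate profile:", np.round(best_profile, 3))
print("surrogate peak stress: ", round(float(best_stress), 4))
```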

Relevance:

30.00%

Publisher:

Abstract:

Since 2007, KITE Arts Education Program @ QPAC has been engaged in a series of arts and drama-based experiences for students in selected primary schools on the edges of Brisbane and in regional Queensland. The in-school workshop experiences of the program have culminated in a performance by the children for their school community, parents and carers at the Queensland Performing Arts Centre or a regional cultural venue. In conducting an analysis of the Yonder project, the researcher aimed to provide evidence of outcomes brought about through participation by schools, school staff, students and their communities in the Yonder project. To develop longitudinal data, project initiators and participants were interviewed at six-monthly intervals to establish patterns of engagement and participation. The report analyses arts-based workshops conducted by the teacher artist in edge-city Brisbane and a regional centre; interviews with teachers and school administrators from the participating schools; interviews with the teacher artist and professional artists; interviews with community partners; teacher professional development workshops; community-based workshops; performance outcomes that were the culminating events of the workshop program; and student work samples and student reflections on the program. This document covers data and project outputs from February 2010 to July 2012. There have been five iterations of the Yonder project since its commencement in mid-2009: three in regional Queensland (February–April 2010; February–May 2011; February–May 2012) and two in edge-city Brisbane (July–September 2010; August–October 2011). This report is a result of a research partnership between Queensland Performing Arts Centre and the Queensland University of Technology (QUT) Creative Industries Faculty (Drama).

Relevance:

30.00%

Publisher:

Abstract:

Wheel-rail rolling contact at a railhead edge, such as at a gap in an insulated rail joint, is a complex problem; only limited analytical, numerical and experimental studies of it are available in the academic literature. This paper describes experimental and numerical investigations of railhead strains in the vicinity of the edge under the contact of a loaded wheel. A full-scale test rig was developed to cyclically apply wheel/rail rolling contact load to the edge zone of the railhead. An image analysis technique was employed to determine the railhead vertical, lateral and shear strain components. The vertical strains determined using the image analysis method were validated against strain gauge measurements and used for the calibration of a 3D nonlinear Finite Element Model (FEM) that simulates the wheel/rail contact at the railhead edge and uses boundary conditions commensurate with the experimental setup. The FEM was then used to determine the other strain states.
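The step from image-derived displacement fields to the strain components mentioned above is a spatial-gradient calculation. The sketch below applies the small-strain definitions to a synthetic displacement field; it is not the paper's image-analysis technique, and the grid spacing and displacement values are arbitrary.

```python
import numpy as np

def small_strains(u, v, dx, dy):
    """Small-strain components from in-plane displacement fields.

    u, v : 2D arrays of lateral (x) and vertical (y) displacements on a regular
           grid (e.g. obtained from image analysis of the railhead surface).
    Returns (eps_xx, eps_yy, gamma_xy), with gamma_xy the engineering shear strain."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    eps_xx = du_dx                       # lateral normal strain
    eps_yy = dv_dy                       # vertical normal strain
    gamma_xy = du_dy + dv_dx             # engineering shear strain
    return eps_xx, eps_yy, gamma_xy

# Synthetic check: a uniform strain state should be recovered exactly.
dx = dy = 0.1
y, x = np.mgrid[0:50, 0:80] * dx
u = 1e-3 * x + 2e-4 * y                  # implies eps_xx = 1e-3, du/dy = 2e-4
v = -3e-4 * y + 1e-4 * x                 # implies eps_yy = -3e-4, dv/dx = 1e-4
exx, eyy, gxy = small_strains(u, v, dx, dy)
print(exx.mean(), eyy.mean(), gxy.mean())   # ~1e-3, ~-3e-4, ~3e-4
```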

Relevance:

30.00%

Publisher:

Abstract:

Quantitative imaging methods to analyze cell migration assays are not standardized. Here we present a suite of two-dimensional barrier assays describing the collective spreading of an initially confined population of 3T3 fibroblast cells. To quantify the motility rate we apply two different automatic image detection methods to locate the position of the leading edge of the spreading population after 24, 48 and 72 hours. These results are compared with a manual edge detection method where we systematically vary the detection threshold. Our results indicate that the observed spreading rates are very sensitive to the choice of image analysis tools and we show that a standard measure of cell migration can vary by as much as 25% for the same experimental images depending on the details of the image analysis tools. Our results imply that it is very difficult, if not impossible, to meaningfully compare previously published measures of cell migration since previous results have been obtained using different image analysis techniques and the details of these techniques are not always reported. Using a mathematical model, we provide a physical interpretation of our edge detection results. The physical interpretation is important since edge detection algorithms alone do not specify any physical measure, or physical definition, of the leading edge of the spreading population. Our modeling indicates that variations in the image threshold parameter correspond to a consistent variation in the local cell density. This means that varying the threshold parameter is equivalent to varying the location of the leading edge in the range of approximately 1–5% of the maximum cell density.
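A minimal one-dimensional illustration of the point about threshold sensitivity (the density profile is synthetic, not the experimental images): the detected leading-edge position shifts substantially as the density threshold moves within the 1–5% range of maximum density.

```python
import numpy as np

# Synthetic 1-D cell density profile of a spreading population (arbitrary shape):
# near-maximal density in the centre, decaying smoothly to zero at the front.
x = np.linspace(0.0, 2000.0, 2001)                       # position (micrometres)
density = 1.0 / (1.0 + np.exp((x - 1200.0) / 120.0))     # fraction of maximum density

def leading_edge_position(density, x, threshold):
    """Furthest position at which the density still exceeds `threshold`,
    expressed as a fraction of the maximum density."""
    above = density >= threshold * density.max()
    return x[above].max()

for thr in (0.01, 0.02, 0.05):
    pos = leading_edge_position(density, x, thr)
    print(f"threshold {thr:4.2f} of max density -> leading edge at x = {pos:7.1f} um")
```

Even for this smooth profile, moving the threshold from 1% to 5% of the maximum density shifts the detected edge by roughly 200 micrometres, which is the kind of sensitivity the study quantifies for real images.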