20 results for Computer Vision for Robotics and Automation
in Aston University Research Archive
Abstract:
There have been two main approaches to feature detection in human and computer vision - luminance-based and energy-based. Bars and edges might arise from peaks of luminance and luminance gradient respectively, or bars and edges might be found at peaks of local energy, where local phases are aligned across spatial frequency. This basic issue of definition is important because it guides more detailed models and interpretations of early vision. Which approach better describes the perceived positions of elements in a 3-element contour-alignment task? We used the class of 1-D images defined by Morrone and Burr in which the amplitude spectrum is that of a (partially blurred) square wave and the Fourier components in a given image have a common phase. Observers judged whether the centre element (e.g. ±45° phase) was to the left or right of the flanking pair (e.g. 0° phase). Lateral offset of the centre element was varied to find the point of subjective alignment from the fitted psychometric function. This point shifted systematically to the left or right according to the sign of the centre phase, increasing with the degree of blur. These shifts were well predicted by the location of luminance peaks and other derivative-based features, but not by energy peaks, which (by design) predicted no shift at all. These results on contour alignment agree well with earlier ones from a more explicit feature-marking task, and strongly suggest that human vision does not use local energy peaks to locate basic first-order features. [Supported by the Wellcome Trust (ref: 056093)]
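As a rough illustration of the contrast between the two feature definitions, the sketch below builds a 1-D stimulus of the Morrone-Burr type (all Fourier components of a partially blurred square-wave spectrum sharing a common phase) and compares where the luminance peak and the local-energy peak fall. The frequency, blur and phase values are arbitrary illustrative choices, not parameters from the experiment.

```python
import numpy as np
from scipy.signal import hilbert

# Minimal sketch (illustrative parameter values, not those of the study) of a
# Morrone-Burr style 1-D stimulus: the amplitude spectrum of a partially
# blurred square wave, with every Fourier component given a common phase phi.
x = np.linspace(-0.25, 0.25, 2000, endpoint=False)  # one spatial period
f0 = 2.0                                 # fundamental frequency (cycles/unit)
blur = 0.15                              # attenuates higher harmonics (more blur)
phi = np.deg2rad(45.0)                   # common phase of all components

harmonics = np.arange(1, 40, 2)          # odd harmonics of a square wave
amps = (1.0 / harmonics) * np.exp(-(harmonics * f0 * blur) ** 2)

luminance = sum(a * np.cos(2 * np.pi * n * f0 * x + phi)
                for a, n in zip(amps, harmonics))

# Local energy = squared luminance + squared quadrature (Hilbert) signal;
# by construction its peaks sit where the component phases align, at x = 0
# (and at the contrast-reversed feature half a period away).
quadrature = np.imag(hilbert(luminance))
energy = luminance ** 2 + quadrature ** 2

centre = np.abs(x) < 0.12                # examine the central feature only
print("luminance peak at x =", x[centre][np.argmax(luminance[centre])])
print("energy peak at x =", x[centre][np.argmax(energy[centre])])
# With phi != 0 the luminance peak is displaced from x = 0, while the energy
# peak stays at x = 0 -- the contrast that the alignment task exploits.
```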
Abstract:
In response to increasing international competitiveness, many manufacturing businesses are rethinking their management strategies and philosophies towards achieving a computer integrated environment. The explosive growth in Advanced Manufacturing Technology (AMT) has resulted in the formation of functional "Islands of Automation" such as Computer Aided Design (CAD), Computer Aided Manufacturing (CAM), Computer Aided Process Planning (CAPP) and Manufacturing Resources Planning (MRPII). This has produced an environment with focused areas of excellence but poor overall efficiency, co-ordination and control. The main role of Computer Integrated Manufacturing (CIM) is to integrate these islands of automation and develop a totally integrated and controlled environment. However, the various perceptions of CIM, although developing, remain focused on a very narrow integration scope and have consequently resulted in merely linked islands of automation with little improvement in overall co-ordination and control. The research described in this thesis develops and examines a more holistic view of CIM, based on the integration of various business elements. One particular business element, namely control, is shown to have a multi-faceted and underpinning relationship with the CIM philosophy. This relationship impacts various CIM system design aspects, including the CIM business analysis and modelling technique, the specification of systems integration requirements, the CIM system architectural form and the degree of business redesign. The research findings show that fundamental changes to CIM system design are required; these are incorporated in a generic CIM design methodology. The effect and influence of this holistic view of CIM on a manufacturing business have been evaluated through various industrial case study applications. Based on the evidence obtained, it is concluded that this holistic, control-based approach to CIM can provide a greatly improved means of achieving a totally integrated and controlled business environment. This generic CIM methodology will therefore make a significant contribution to the planning, modelling, design and development of future CIM systems.
Abstract:
This thesis presents a study of how edges are detected and encoded by the human visual system. The study begins with theoretical work on the development of a model of edge processing, and includes psychophysical experiments on humans, and computer simulations of these experiments, using the model. The first chapter reviews the literature on edge processing in biological and machine vision, and introduces the mathematical foundations of this area of research. The second chapter gives a formal presentation of a model of edge perception that detects edges and characterizes their blur, contrast and orientation, using Gaussian derivative templates. This model has previously been shown to accurately predict human performance in blur matching tasks with several different types of edge profile. The model provides veridical estimates of the blur and contrast of edges that have a Gaussian integral profile. Since blur and contrast are independent parameters of Gaussian edges, the model predicts that varying one parameter should not affect perception of the other. Psychophysical experiments showed that this prediction is incorrect: reducing the contrast makes an edge look sharper; increasing the blur reduces the perceived contrast. Both of these effects can be explained by introducing a smoothed threshold to one of the processing stages of the model. It is shown that, with this modification, the model can predict the perceived contrast and blur of a number of edge profiles that differ markedly from the ideal Gaussian edge profiles on which the templates are based. With only a few exceptions, the results from all the experiments on blur and contrast perception can be explained reasonably well using one set of parameters for each subject. In the few cases where the model fails, possible extensions to the model are discussed.
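To make the idea of veridical blur and contrast estimates for a Gaussian-integral edge concrete, the sketch below uses the simple derivative-ratio property of such an edge: the ratio of the first to the third spatial derivative at the edge centre recovers the blur, and the first derivative then fixes the contrast. This is only an illustration of the Gaussian-derivative principle; the thesis model itself works with derivative templates and a smoothed threshold, which are not reproduced here.

```python
import numpy as np
from scipy.special import erf

# Illustrative sketch only: for an ideal Gaussian-integral edge of contrast C
# and blur sigma, sigma = sqrt(-L'(0) / L'''(0)) and C = L'(0) * sigma * sqrt(2*pi).
C_true, sigma_true = 0.4, 2.0              # hypothetical contrast and blur
x = np.linspace(-40, 40, 8001)
dx = x[1] - x[0]
L = 0.5 + 0.5 * C_true * erf(x / (sigma_true * np.sqrt(2)))  # edge profile

d1 = np.gradient(L, dx)                    # first spatial derivative
d3 = np.gradient(np.gradient(d1, dx), dx)  # third spatial derivative

i0 = np.argmin(np.abs(x))                  # sample at the edge centre
sigma_est = np.sqrt(-d1[i0] / d3[i0])
C_est = d1[i0] * sigma_est * np.sqrt(2 * np.pi)

print(f"blur: true {sigma_true}, estimated {sigma_est:.3f}")
print(f"contrast: true {C_true}, estimated {C_est:.3f}")
```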
Abstract:
The work presented in this thesis falls into three main categories: the design and synthesis of potential anti-tuberculosis drugs targeting a mycobacterial esterase and the enzyme dUTPase; synthesis and anti-microbial SAR studies on a set of carboxamidrazones; and synthesis and anti-microbial SAR studies on a set of thiosemicarbazones.
Abstract:
In this paper we propose an approach based on self-interested autonomous cameras, which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony inspired mechanism is used to grow the vision graph during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, which permits the addition and removal of cameras to and from the network during runtime, and, on the other hand, relying only on local information, increasing the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. © 2011 IEEE.
Abstract:
In this article we present an approach to object tracking handover in a network of smart cameras, based on self-interested autonomous agents which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony inspired mechanism is used to learn the vision graph, that is, the camera neighbourhood relations, during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, enabling efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, relying only on local information, increasing the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
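A minimal sketch of the handover idea follows, with hypothetical names, utilities and parameter values: a camera that currently owns a target auctions it off, hands it over to any camera whose bid (here, a placeholder visibility score) exceeds its own utility, and reinforces a pheromone-style link weight to the buyer while older links decay, so that a graph of useful neighbours emerges at runtime. This is not the papers' exact mechanism, only an illustration of the market-plus-pheromone principle they describe.

```python
import random
from collections import defaultdict

EVAPORATION = 0.9      # pheromone decay per handover round (assumed value)
DEPOSIT = 1.0          # pheromone added on a successful handover (assumed value)

class Camera:
    def __init__(self, name):
        self.name = name
        self.pheromone = defaultdict(float)   # link strength to other cameras

    def visibility(self, obj):
        # Placeholder utility: how well this camera currently sees obj.
        return random.random()

    def auction(self, obj, others):
        """Sell the object to the highest bidder if it outbids us."""
        bids = [(cam.visibility(obj), cam) for cam in others]
        best_bid, winner = max(bids, key=lambda b: b[0])
        if best_bid > self.visibility(obj):
            for cam in others:                 # older links decay...
                self.pheromone[cam.name] *= EVAPORATION
            self.pheromone[winner.name] += DEPOSIT   # ...successful one is reinforced
            return winner
        return self                            # keep tracking the object ourselves

cams = [Camera(f"cam{i}") for i in range(4)]
owner = cams[0]
for _ in range(20):
    owner = owner.auction("object-1", [c for c in cams if c is not owner])
print({c.name: dict(c.pheromone) for c in cams})
```

In a full system the accumulated link weights would decide which cameras are invited to future auctions, which is how a learned vision graph can reduce communication.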
Abstract:
Once, the factory worker was considered to be a necessary evil, soon to be replaced by robotics and automation. Today, many manufacturers appreciate that people in direct productive roles can provide important flexibility and responsiveness, and so contribute significantly to business success. The challenge is no longer to design people out of the factory, but to design factory environments that help to get the best performance from people. This paper describes research that has set out to help achieve this by expanding the capabilities of the simulation modeling tools currently used by practitioners.
Abstract:
In the study of complex networks, vertex centrality measures are used to identify the most important vertices within a graph. A related problem is that of measuring the centrality of an edge. In this paper, we propose a novel edge centrality index rooted in quantum information. More specifically, we measure the importance of an edge in terms of the contribution that it makes to the Von Neumann entropy of the graph. We show that this can be computed in terms of the Holevo quantity, a well-known quantum information theoretic measure. While computing the Von Neumann entropy, and hence the Holevo quantity, requires computing the spectrum of the graph Laplacian, we show how to obtain a simplified measure through a quadratic approximation of the Shannon entropy. This in turn shows that the proposed centrality measure is strongly correlated with the negative degree centrality on the line graph. We evaluate our centrality measure through an extensive set of experiments on real-world as well as synthetic networks, and we compare it against commonly used alternative measures.
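The sketch below illustrates the ingredients referred to above: the graph density matrix obtained by normalising the Laplacian by its trace, the Von Neumann entropy computed from its spectrum, and the quadratic (linear-entropy) approximation that avoids the eigendecomposition. Scoring an edge by the entropy drop caused by its removal is used here only as a simple proxy for "the contribution the edge gives to the entropy"; the paper's actual index is the Holevo quantity, which is not reproduced here.

```python
import numpy as np
import networkx as nx

def density_matrix(G):
    # Graph density matrix: Laplacian normalised by its trace.
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return L / np.trace(L)

def von_neumann_entropy(G):
    # Entropy of the Laplacian spectrum (zero eigenvalues contribute nothing).
    lam = np.linalg.eigvalsh(density_matrix(G))
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def quadratic_entropy(G):
    # Quadratic (linear-entropy) approximation: 1 - Tr(rho^2), no spectrum needed.
    rho = density_matrix(G)
    return float(1.0 - np.trace(rho @ rho))

def edge_entropy_contribution(G, e):
    # Simple proxy: entropy change when the edge is removed.
    H = G.copy()
    H.remove_edge(*e)
    return von_neumann_entropy(G) - von_neumann_entropy(H)

G = nx.karate_club_graph()
scores = {e: edge_entropy_contribution(G, e) for e in G.edges()}
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("exact entropy:", round(von_neumann_entropy(G), 4),
      "quadratic approx:", round(quadratic_entropy(G), 4))
print("highest-contribution edges:", top)
```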
Abstract:
Laplacian-based descriptors, such as the Heat Kernel Signature (HKS) and the Wave Kernel Signature (WKS), allow one to embed the vertices of a graph into a vectorial space, and have been successfully used to find the optimal matching between a pair of input graphs. While the HKS uses a heat diffusion process to probe the local structure of a graph, the WKS attempts to do the same through wave propagation. In this paper, we propose an alternative structural descriptor that is based on continuous-time quantum walks. More specifically, we characterise the structure of a graph using its average mixing matrix. The average mixing matrix is a doubly-stochastic matrix that encodes the time-averaged behaviour of a continuous-time quantum walk on the graph. We propose to use the rows of the average mixing matrix, for increasing stopping times, to develop a novel signature, the Average Mixing Matrix Signature (AMMS). We perform an extensive range of experiments and show that the proposed signature is robust under structural perturbations of the original graphs and outperforms both the HKS and WKS when used as a node descriptor in a graph matching task.
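The sketch below computes the infinite-time average mixing matrix of a continuous-time quantum walk, assuming the adjacency matrix as the Hamiltonian: the time average of the instantaneous mixing matrix M(t) = U(t) ∘ conj(U(t)), with U(t) = exp(-iAt), has the closed form of a sum over eigenspaces of the Schur square of the eigenprojectors. A row of this doubly-stochastic matrix can then serve as a node descriptor; the finite, increasing stopping times used by the AMMS are omitted here for brevity.

```python
import numpy as np
import networkx as nx

def average_mixing_matrix(G, tol=1e-9):
    # Infinite-time average mixing matrix: sum over eigenspaces of A of the
    # entrywise (Schur) square of the orthogonal eigenprojectors.
    A = nx.to_numpy_array(G)
    w, V = np.linalg.eigh(A)
    M = np.zeros_like(A)
    i = 0
    while i < len(w):
        j = i
        while j < len(w) and abs(w[j] - w[i]) < tol:  # group equal eigenvalues
            j += 1
        P = V[:, i:j] @ V[:, i:j].T                   # projector onto the eigenspace
        M += P * P                                    # Schur (entrywise) square
        i = j
    return M                                          # doubly stochastic

G = nx.cycle_graph(6)
M = average_mixing_matrix(G)
print(np.round(M, 3))
print("row sums:", np.round(M.sum(axis=1), 6))        # each row sums to 1
node_descriptor = M[0]                                # a row used as a node signature
```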
Abstract:
There have been two main approaches to feature detection in human and computer vision - based either on the luminance distribution and its spatial derivatives, or on the spatial distribution of local contrast energy. Thus, bars and edges might arise from peaks of luminance and luminance gradient respectively, or bars and edges might be found at peaks of local energy, where local phases are aligned across spatial frequency. This basic issue of definition is important because it guides more detailed models and interpretations of early vision. Which approach better describes the perceived positions of features in images? We used the class of 1-D images defined by Morrone and Burr in which the amplitude spectrum is that of a (partially blurred) square wave and all Fourier components have a common phase. Observers used a cursor to mark where bars and edges were seen for different test phases (Experiment 1) or judged the spatial alignment of contours that had different phases (e.g. 0° and 45°; Experiment 2). The feature positions defined by both tasks shifted systematically to the left or right according to the sign of the phase offset, increasing with the degree of blur. These shifts were well predicted by the location of luminance peaks (bars) and gradient peaks (edges), but not by energy peaks which (by design) predicted no shift at all. These results encourage models based on a Gaussian-derivative framework, but do not support the idea that human vision uses points of phase alignment to find local, first-order features. Nevertheless, we argue that both approaches are presently incomplete and a better understanding of early vision may combine insights from both. © 2004 Elsevier Ltd. All rights reserved.
Abstract:
PURPOSE: To design and validate a vision-specific quality-of-life assessment tool to be used in a clinical setting to evaluate low-vision rehabilitation strategy and management. METHODS: Previous vision-related questionnaires were assessed by low-vision rehabilitation professionals and patients for relevance and coverage. The 74 items selected were pretested to ensure correct interpretation. One hundred and fifty patients with low vision completed the chosen questions on four occasions to allow the selection of the most appropriate items. The vision-specific quality of life of patients with low vision was compared with that of 70 age-matched and gender-matched patients with normal vision, and before and after low-vision rehabilitation in 278 patients. RESULTS: Items that were unreliable, internally inconsistent, redundant, or not relevant were excluded, resulting in the 25-item Low Vision Quality-of-Life Questionnaire (LVQOL). Completion of the LVQOL results in a summed score between 0 (a low quality of life) and 125 (a high quality of life). The LVQOL has a high internal consistency (α = 0.88) and good reliability (0.72). The average LVQOL score for a population with low vision (60.9 ± 25.1) was significantly lower than the average score of those with normal vision (100.3 ± 20.8). Rehabilitation improved the LVQOL score of those with low vision by an average of 6.8 ± 15.6 (17%). CONCLUSIONS: The LVQOL was shown to be an internally consistent, reliable, and fast method for measuring the vision-specific quality of life of the visually impaired in a clinical setting. It is able to quantify the quality of life of those with low vision and is useful in determining the effects of low-vision rehabilitation. Copyright (C) 2000 Elsevier Science Inc.
Abstract:
The aim of this research was to investigate the integration of computer-aided drafting and finite-element analysis in a linked computer-aided design procedure and to develop the necessary software. The Bézier surface patch was used for surface representation to bridge the gap between the rather separate fields of drafting and finite-element analysis, because such surfaces are defined by analytical functions which allow systematic and controlled variation of the shape and provide continuous derivatives up to any required degree. The objectives of this research were achieved by establishing: (i) a package which interprets the engineering drawings of plate and shell structures and prepares the Bézier net necessary for surface representation; (ii) a general-purpose stand-alone meshed-surface modelling package for surface representation of plates and shells using the Bézier surface patch technique; (iii) a translator which adapts the geometric description of plate and shell structures, as given by the meshed-surface modeller, to the form needed by the finite-element analysis package. The translator was extended to suit fan impellers by taking advantage of their sectorial symmetry. The linking processes were carried out for simple test structures and for simplified and actual fan impellers to verify the flexibility and usefulness of the linking technique adopted. Finite-element results for thin plate and shell structures showed excellent agreement with those obtained by other investigators, while results for the simplified and actual fan impellers also showed good agreement with those obtained in an earlier investigation where the finite-element analysis input data were manually prepared. Some extensions of this work are also discussed.
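To illustrate why Bézier surface patches suit this role, the sketch below evaluates points on a bicubic patch as Bernstein-weighted blends of a control net, the kind of analytic description a meshed-surface modeller could pass on to a mesh generator. The control net and sampling grid are arbitrary illustrative choices, not data from the thesis.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    # Bernstein basis polynomial B_{i,n}(t).
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_patch(control_net, u, v):
    """Evaluate a Bezier patch at parameters (u, v) in [0, 1]^2."""
    n, m = control_net.shape[0] - 1, control_net.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * control_net[i, j]
    return point

# 4x4 control net (bicubic patch): a gently curved plate, purely illustrative.
xs, ys = np.meshgrid(np.linspace(0, 3, 4), np.linspace(0, 3, 4), indexing="ij")
zs = 0.5 * np.sin(xs) * np.cos(ys)
net = np.stack([xs, ys, zs], axis=-1)          # shape (4, 4, 3)

# Sample a coarse grid of surface points, as a mesh generator might.
grid = [[bezier_patch(net, u, v) for v in np.linspace(0, 1, 5)]
        for u in np.linspace(0, 1, 5)]
print(np.round(np.array(grid), 3).shape)       # (5, 5, 3) surface points
```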
Abstract:
This study considers the application of image analysis in petrography and investigates the possibilities for advancing existing techniques by introducing feature extraction and analysis capabilities of a higher level than those currently employed. The aim is to construct relevant, useful descriptions of crystal form and inter-crystal relations in polycrystalline igneous rock sections. Such descriptions cannot be derived until the 'ownership' of boundaries between adjacent crystals has been established: this is the fundamental problem of crystal boundary assignment. An analysis of this problem establishes key image features which reveal boundary ownership; a set of explicit analysis rules is presented. A petrographic image analysis scheme based on these principles is outlined and the implementation of key components of the scheme considered. An algorithm for the extraction and symbolic representation of image structural information is developed. A new multiscale analysis algorithm which produces a hierarchical description of the linear and near-linear structure on a contour is presented in detail. Novel techniques for symmetry analysis are developed. The analyses considered contribute both to the solution of the boundary assignment problem and to the construction of geologically useful descriptions of crystal form. The analysis scheme which is developed employs grouping principles such as collinearity, parallelism, symmetry and continuity, so providing a link between this study and more general work in perceptual grouping and intermediate-level computer vision. Consequently, the techniques developed in this study may be expected to find wider application beyond the petrographic domain.