10 results for Regular operators, basic elementary operators, Banach lattices
in Aston University Research Archive
Abstract:
Ernst Mach observed that light or dark bands could be seen at abrupt changes of luminance gradient in the absence of peaks or troughs in luminance. Many models of feature detection share the idea that bars, lines, and Mach bands are found at peaks and troughs in the output of even-symmetric spatial filters. Our experiments assessed the appearance of Mach bands (position and width) and the probability of seeing them on a novel set of generalized Gaussian edges. Mach band probability was mainly determined by the shape of the luminance profile and increased with the sharpness of its corners, controlled by a single parameter (n). Doubling or halving the size of the images had no significant effect. Variations in contrast (20%-80%) and duration (50-300 ms) had relatively minor effects. These results rule out the idea that Mach bands depend simply on the amplitude of the second derivative, but a multiscale model, based on Gaussian-smoothed first- and second-derivative filtering, can account accurately for the probability and perceived spatial layout of the bands. A key idea is that Mach band visibility depends on the ratio of second- to first-derivative responses at peaks in the second-derivative scale-space map. This ratio is approximately scale-invariant and increases with the sharpness of the corners of the luminance ramp, as observed. The edges of Mach bands pose a surprisingly difficult challenge for models of edge detection, but a nonlinear third-derivative operation is shown to predict the locations of Mach band edges strikingly well. Mach bands thus shed new light on the role of multiscale filtering systems in feature coding. © 2012 ARVO.
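The key quantity in the model above is the ratio of second- to first-derivative responses at peaks in the second-derivative scale-space map. The sketch below illustrates that computation on a generalized Gaussian edge; it is a minimal reconstruction from the abstract, not the authors' code, and the edge construction, scale range and sharpness parameter n are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-4, 4, 2001)
dx = x[1] - x[0]

def gen_gaussian_edge(n, width=1.0):
    """Edge whose luminance gradient is a generalized Gaussian exp(-|x/w|^n);
    larger n gives sharper 'corners' on the luminance ramp (an assumed form)."""
    grad = np.exp(-np.abs(x / width) ** n)
    edge = np.cumsum(grad) * dx
    return edge / edge[-1]               # normalise to unit contrast

edge = gen_gaussian_edge(n=8.0)

scales = np.geomspace(0.05, 1.0, 40)     # filter scales (illustrative range)
ratios = []
for s in scales:
    d1 = gaussian_filter1d(edge, sigma=s / dx, order=1) / dx      # smoothed 1st derivative
    d2 = gaussian_filter1d(edge, sigma=s / dx, order=2) / dx**2   # smoothed 2nd derivative
    k = np.argmax(np.abs(d2))            # peak in the 2nd-derivative response
    ratios.append(np.abs(d2[k]) / (np.abs(d1[k]) + 1e-12))

# Per the abstract, a higher 2nd/1st ratio predicts more visible Mach bands,
# and the ratio grows as n sharpens the corners of the ramp.
print(f"max 2nd/1st-derivative ratio over scales: {max(ratios):.3f}")
```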
Abstract:
Kozlov & Maz'ya (1989, Algebra Anal., 1, 144–170) proposed an alternating iterative method for solving Cauchy problems for general strongly elliptic and formally self-adjoint systems. However, in many applied problems, operators appear that do not satisfy these requirements, e.g. Helmholtz-type operators. Therefore, in this study, an alternating procedure for solving Cauchy problems for self-adjoint non-coercive elliptic operators of second order is presented. A convergence proof of this procedure is given.
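A schematic of the alternating iteration, in the Kozlov-Maz'ya style that the abstract builds on, is sketched below. The two mixed-problem solvers are hypothetical placeholders: in practice each would be a well-posed mixed boundary value solve (e.g. by FEM or BEM) for the elliptic operator in question.

```python
import numpy as np

def alternating_method(f, g, solve_D0, solve_N0, eta0, tol=1e-8, max_iter=200):
    """Recover missing boundary data on Gamma_1 from Cauchy data on Gamma_0.

    f, g     : Dirichlet and Neumann data given on Gamma_0
    solve_D0 : placeholder; solves the mixed problem with u = f on Gamma_0 and
               du/dn = eta on Gamma_1, returning the trace u on Gamma_1
    solve_N0 : placeholder; solves the mixed problem with du/dn = g on Gamma_0
               and u = phi on Gamma_1, returning the flux du/dn on Gamma_1
    """
    eta = np.asarray(eta0, dtype=float)      # initial guess for du/dn on Gamma_1
    phi = None
    for _ in range(max_iter):
        phi = solve_D0(f, eta)               # step 1: use the Dirichlet datum
        eta_new = solve_N0(g, phi)           # step 2: use the Neumann datum
        if np.max(np.abs(eta_new - eta)) < tol:
            return phi, eta_new              # converged boundary data on Gamma_1
        eta = eta_new
    return phi, eta                          # best iterate after max_iter sweeps
```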
Abstract:
This PhD thesis analyses networks of knowledge flows, focusing on the role of indirect ties in the processes of knowledge transfer, knowledge accumulation and knowledge creation. It extends and improves existing methods for mapping networks of knowledge flows in two different applications and contributes to two streams of research. To support the underlying idea of this thesis, which is to find an alternative method of ranking indirect network ties and so shed new light on the dynamics of knowledge transfer, we apply Ordered Weighted Averaging (OWA) to two different network contexts. Knowledge flows in patent citation networks and in a company supply chain network are analysed using Social Network Analysis (SNA) and the OWA operator. The OWA is used here for the first time (i) to rank indirect citations in patent networks, providing new insight into their role in transferring knowledge among network nodes, and to analyse a long chain of patent generations spanning 13 years; and (ii) to rank indirect relations in a company supply chain network, shedding light on the role of indirectly connected individuals in the knowledge transfer and creation processes and contributing to the literature on knowledge management in supply chains. In doing so, indirect ties are measured and their role as a means of knowledge transfer is demonstrated. This thesis thus represents a first attempt to bridge the OWA and SNA fields and to show that the two methods can be used together to enrich our understanding of the role of indirectly connected nodes in a network. More specifically, the OWA scores enrich our understanding of knowledge evolution over time within complex networks. Future research could demonstrate the usefulness of the OWA operator in other complex networks, such as online social networks consisting of thousands of nodes.
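The OWA operator itself is a weighted average applied to a reordered argument list. A minimal sketch follows; the weights and tie-strength values are invented for illustration and are not data from the thesis.

```python
import numpy as np

def owa(values, weights):
    """OWA(a) = sum_j w_j * b_j, where b is `values` sorted in descending order."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    assert values.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return np.sort(values)[::-1] @ weights

# Example: aggregate the link strengths along one indirect path (say, a chain
# of citations) into a single score for that indirect tie.
path_strengths = [0.9, 0.4, 0.7]
weights = [0.5, 0.3, 0.2]            # front-loaded weights emphasise strong links
print(owa(path_strengths, weights))  # 0.9*0.5 + 0.7*0.3 + 0.4*0.2 = 0.74
```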
Abstract:
This study surveys the ordered weighted averaging (OWA) operator literature using a citation network analysis. The main goals are the historical reconstruction of the scientific development of the OWA field, the identification of the dominant direction of knowledge accumulation that has emerged since the publication of the first OWA paper, and the discovery of the most active lines of research. The results suggest, as expected, that Yager's paper (IEEE Trans. Systems Man Cybernet., 18(1), 183-190, 1988) is the most influential paper and the starting point of all other research using OWA. We describe the other lines of research that developed from that contribution.
Abstract:
Marr's work offered guidelines on how to investigate vision (the theory - algorithm - implementation distinction), as well as specific proposals on how vision is done. Many of the latter have inevitably been superseded, but the approach was inspirational and remains so. Marr saw the computational study of vision as tightly linked to psychophysics and neurophysiology, but the last twenty years have seen some weakening of that integration. Because feature detection is a key stage in early human vision, we have returned to basic questions about representation of edges at coarse and fine scales. We describe an explicit model in the spirit of the primal sketch, but tightly constrained by psychophysical data. Results from two tasks (location-marking and blur-matching) point strongly to the central role played by second-derivative operators, as proposed by Marr and Hildreth. Edge location and blur are evaluated by finding the location and scale of the Gaussian-derivative 'template' that best matches the second-derivative profile ('signature') of the edge. The system is scale-invariant, and accurately predicts blur-matching data for a wide variety of 1-D and 2-D images. By finding the best-fitting scale, it implements a form of local scale selection and circumvents the knotty problem of integrating filter outputs across scales. [Supported by BBSRC and the Wellcome Trust]
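The template-matching step rests on a simple identity: a Gaussian-blurred step edge of blur B has a second-derivative 'signature' proportional to a Gaussian first derivative of scale B, so the best-fitting Gaussian-derivative template recovers B. The sketch below is a minimal reconstruction of that idea, not the published model; the sampling grid and scale range are assumptions.

```python
import numpy as np
from scipy.special import erf

x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]
B_true = 0.5
edge = 0.5 * (1 + erf(x / (B_true * np.sqrt(2))))    # Gaussian-blurred step, blur B

signature = np.gradient(np.gradient(edge, dx), dx)   # 2nd-derivative 'signature'

def template(s):
    """Gaussian first derivative of scale s (the signature shape of a blurred
    step edge), unit-normalised so the correlation compares shape only."""
    t = -x / s**3 * np.exp(-x**2 / (2 * s**2))
    return t / np.linalg.norm(t)

scales = np.geomspace(0.1, 2.0, 200)
scores = [abs(signature @ template(s)) for s in scales]
best = scales[np.argmax(scores)]                     # scale of best-fitting template
print(f"true blur {B_true}, estimated blur {best:.3f}")
```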
Abstract:
The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. To accelerate exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure manageable fault-tolerant open distributed agile Total Quality Managed ISO 9000+ conformant Just in Time manufacturing systems.
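As a toy illustration of the local linear inverse map idea (the thesis treats 3- and 7-dof arms with an augmented Jacobian; this sketch uses only a 2-link planar arm with invented link lengths), damped Jacobian steps correct a linear initial guess toward an exact inverse kinematic solution:

```python
import numpy as np

L1, L2 = 1.0, 0.8                        # link lengths (illustrative values)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def jacobian(q):
    return np.array([[-L1*np.sin(q[0]) - L2*np.sin(q[0]+q[1]), -L2*np.sin(q[0]+q[1])],
                     [ L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),  L2*np.cos(q[0]+q[1])]])

def ik_local(q0, target, iters=20, damping=1e-3):
    """Correct an initial guess q0 with damped Jacobian steps,
    dq = J^T (J J^T + lambda I)^{-1} dx, i.e. a local linear inverse map."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        dx = target - fk(q)
        J = jacobian(q)
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), dx)
    return q

q = ik_local([0.3, 0.5], target=np.array([1.2, 0.9]))
print(q, fk(q))                          # fk(q) should be close to the target
```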
Abstract:
Operators can become confused when diagnosing faults in a process plant that is in operation. This may prevent remedial actions being taken before hazardous consequences occur. The work in this thesis proposes a method to aid plant operators in systematically finding the causes of any fault in the process plant. A computer-aided fault diagnosis package has been developed for use on the widely available IBM PC compatible microcomputer. The program displays a coloured diagram of a fault tree on the VDU of the microcomputer, so that the operator can see the link between the fault and its causes. The consequences of the fault and the causes of the fault are also shown, to provide a warning of what may happen if the fault is not remedied. The cause and effect data needed by the package are obtained from a hazard and operability (HAZOP) study on the process plant. The result of the HAZOP study is recorded as cause and symptom equations, which are translated into a data structure and stored in the computer as a file for the package to access. Probability values are assigned to the events that constitute the basic causes of any deviation. From these probability values, the a priori probabilities of occurrence of the other events are evaluated. A top-down recursive algorithm, called TDRA, for evaluating the probability of every event in a fault tree has been developed. From the a priori probabilities, the conditional probabilities of the causes of the fault are then evaluated using Bayes' theorem. The posterior probability values can then be used by the operators to check the possible causes of the fault in an orderly manner. The package has been tested using the results of a HAZOP study on a pilot distillation plant. The results from the test show how easy it is to trace the chain of events that leads to the primary cause of a fault. This method could be applied in a real process environment.
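TDRA itself is not reproduced in the abstract, so the sketch below is only a plausible reading of the two-step scheme: a top-down recursive evaluation of event probabilities in an AND/OR fault tree (basic events assumed independent), followed by Bayes' theorem to rank the basic causes given that the top fault has occurred. The tree and probability values are invented for illustration.

```python
# Invented example: basic-event priors and a small AND/OR fault tree.
basic = {"pump_fails": 0.02, "valve_stuck": 0.05, "sensor_drift": 0.10}
tree = {"top":     ("OR",  ["no_flow", "sensor_drift"]),
        "no_flow": ("AND", ["pump_fails", "valve_stuck"])}

def prob(event, basic, tree):
    """Top-down recursive evaluation of P(event), assuming independence."""
    if event in basic:
        return basic[event]
    gate, children = tree[event]
    ps = [prob(c, basic, tree) for c in children]
    if gate == "AND":                    # all children must occur
        out = 1.0
        for p in ps:
            out *= p
        return out
    out = 1.0                            # OR gate: 1 - prod(1 - p_i)
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

p_top = prob("top", basic, tree)
for cause, p_cause in basic.items():
    forced = dict(basic, **{cause: 1.0})             # condition on the cause
    p_top_given_cause = prob("top", forced, tree)
    posterior = p_top_given_cause * p_cause / p_top  # Bayes' theorem
    print(f"P({cause} | top fault) = {posterior:.3f}")
```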
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.
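For the binary-input AWGN channel mentioned above, the mutual information with equiprobable inputs x = +-1 and noise variance sigma^2 has the standard form I = 1 - E_{y~N(1,sigma^2)}[log2(1 + exp(-2y/sigma^2))]. The sketch below evaluates it numerically; this textbook formula is background knowledge, not a result of the paper.

```python
import numpy as np

def biawgn_mutual_information(sigma, n_points=20001):
    """I(X;Y) in bits for y = x + n, x in {-1, +1} equiprobable, n ~ N(0, sigma^2)."""
    y = np.linspace(1 - 10 * sigma, 1 + 10 * sigma, n_points)  # y given x = +1
    pdf = np.exp(-(y - 1)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    integrand = pdf * np.log2(1 + np.exp(-2 * y / sigma**2))
    return 1.0 - np.sum(integrand) * (y[1] - y[0])             # Riemann sum

for sigma in (0.5, 0.8, 1.0):
    print(f"sigma = {sigma}: I = {biawgn_mutual_information(sigma):.4f} bits")
```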
Abstract:
Simple features such as edges are the building blocks of spatial vision, and so I ask: how are visual features and their properties (location, blur and contrast) derived from the responses of spatial filters in early vision; how are these elementary visual signals combined across the two eyes; and when are they not combined? Our psychophysical evidence from blur-matching experiments strongly supports a model in which edges are found at the spatial peaks of response of odd-symmetric receptive fields (gradient operators), and their blur B is given by the spatial scale of the most active operator. This model can explain some surprising aspects of blur perception: edges look sharper when they are low contrast, and when their length is made shorter. Our experiments on binocular fusion of blurred edges show that single vision is maintained for disparities up to about 2.5*B, followed by diplopia or suppression of one edge at larger disparities. Edges of opposite polarity never fuse. Fusion may be served by binocular combination of monocular gradient operators, but that combination - involving binocular summation and interocular suppression - is not completely understood. In particular, linear summation (supported by psychophysical and physiological evidence) predicts that fused edges should look more blurred with increasing disparity (up to 2.5*B), but results surprisingly show that edge blur appears constant across all disparities, whether fused or diplopic. Finally, when edges of very different blur are shown to the left and right eyes fusion may not occur, but perceived blur is not simply given by the sharper edge, nor by the higher contrast. Instead, it is the ratio of contrast to blur that matters: the edge with the steeper gradient dominates perception. The early stages of binocular spatial vision speak the language of luminance gradients.
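The closing claim has a simple quantitative reading: for a Gaussian-blurred edge of contrast C and blur B, the peak luminance gradient is C/(B*sqrt(2*pi)), so the contrast-to-blur ratio decides which eye's edge dominates. A minimal sketch, with invented contrasts and blurs:

```python
import numpy as np

def peak_gradient(contrast, blur):
    """Peak slope of a Gaussian-blurred edge: d/dx [C * Phi(x/B)] at x = 0."""
    return contrast / (blur * np.sqrt(2 * np.pi))

left  = peak_gradient(contrast=0.8, blur=0.20)   # higher contrast, more blurred
right = peak_gradient(contrast=0.4, blur=0.05)   # lower contrast, much sharper
print("left eye dominates" if left > right else "right eye dominates")
# Here the right eye's edge wins (3.19 vs 1.60): the steeper gradient,
# not the higher contrast, dominates perception.
```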