1000 results for Spatial filtering
Abstract:
At present, the prediction and characterization of the emission output of a diffusive random laser remains a challenge, despite the variety of materials investigated and theoretical interpretations given up to now. Here, a new mode-selection method based on spatial filtering and ultrafast detection is presented, which makes it possible to separate individual lasing modes and follow their temporal evolution. In particular, the work explores the random-laser behavior of a ground powder of an organic-inorganic hybrid compound based on Rhodamine B incorporated into a di-ureasil host. The experimental approach gives direct access to the mode structure and dynamics, shows clear modal relaxation oscillations, and illustrates the stochastic behavior of the lasing modes of this diffusive scattering system. The effect of the excitation energy on the modal density is also investigated. Finally, imaging measurements reveal the dominant role of diffusion over amplification processes in this kind of unconventional laser. (C) 2015 Optical Society of America
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
A method to reduce the noise power in the far-field pattern without modifying the desired signal is proposed, so that a significant signal-to-noise ratio improvement can be achieved. The method applies when the antenna measurement is performed in a planar near-field range, where the recorded data are assumed to be corrupted by white Gaussian, space-stationary noise arising from the receiver's additive noise. When the measured field is back-propagated from the scan plane to the antenna-under-test (AUT) plane, the noise remains white Gaussian and space-stationary, whereas the desired field is theoretically concentrated within the antenna aperture. Thanks to this fact, spatial filtering can be applied, cancelling the field located outside the AUT dimensions, which is composed only of noise. Next, a planar near-field-to-far-field transformation is carried out, achieving a large improvement over the pattern obtained directly from the measurement. To verify the effectiveness of the method, two examples are presented using both simulated and measured near-field data.
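As a rough sketch of the processing chain just described, the back-propagation and spatial-filtering steps might look as follows; the FFT-based plane-wave-spectrum propagator, the function name and its parameters are our assumptions for illustration, not the paper's code.

```python
import numpy as np

def backpropagate_and_filter(E_meas, dx, d, freq, aperture_mask):
    """Hypothetical sketch: back-propagate a planar near-field scan to the
    AUT plane via the plane-wave spectrum, zero the field outside the
    aperture, and return the filtered spectrum (proportional to the far
    field). aperture_mask is 1 inside the AUT aperture, 0 outside."""
    k0 = 2 * np.pi * freq / 3e8                   # free-space wavenumber
    N = E_meas.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # spectral grid (rad/m)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kz = np.sqrt((k0**2 - kx**2 - ky**2).astype(complex))
    A = np.fft.fft2(E_meas)                       # plane-wave spectrum at z = d
    A_aut = A * np.exp(1j * kz * d)               # back-propagate a distance d
                                                  # (evanescent part is left to
                                                  # decay rather than amplified)
    E_aut = np.fft.ifft2(A_aut)                   # field on the AUT plane
    return np.fft.fft2(E_aut * aperture_mask)     # spatial filter, then far field
```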
Abstract:
This paper describes two methods to cancel the effect of two kinds of leakage signals that may be present when an antenna is measured in a planar near-field range. The first method reduces leakage bias errors from the receiver's quadrature detector and is based on estimating the bias constant added to every near-field data sample. That constant is then subtracted from the data, removing its undesired effect on the far-field pattern. The estimation is performed by back-propagating the field from the scan plane to the antenna-under-test (AUT) plane and averaging all the data located outside the AUT aperture. The second method cancels the effect of leakage from faulty transmission lines, connectors or rotary joints. The basis of this method is also a reconstruction process that determines the field distribution on the AUT plane. Once this distribution is known, spatial filtering is applied to cancel the contribution of the faulty elements. After that, a near-field-to-far-field transformation is applied, obtaining a new radiation pattern from which the leakage effects have disappeared. To verify the effectiveness of both methods, several examples are presented.
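A minimal sketch of the bias-estimation idea, reusing the same assumed plane-wave-spectrum propagator as in the previous sketch (names and details are hypothetical, not the paper's code):

```python
import numpy as np

def estimate_and_remove_bias(E_meas, dx, d, freq, aperture_mask):
    """Hypothetical sketch: back-propagate the scan to the AUT plane,
    estimate the complex bias constant from the region outside the
    aperture (where the true field should be negligible), and subtract
    it from the raw samples. aperture_mask is a boolean array, True
    inside the AUT aperture."""
    k0 = 2 * np.pi * freq / 3e8
    N = E_meas.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kz = np.sqrt((k0**2 - kx**2 - ky**2).astype(complex))
    E_aut = np.fft.ifft2(np.fft.fft2(E_meas) * np.exp(1j * kz * d))
    bias_aut = E_aut[~aperture_mask].mean()       # average outside the aperture
    bias_scan = bias_aut * np.exp(-1j * k0 * d)   # a constant travels through the
                                                  # kx = ky = 0 plane wave only
    return E_meas - bias_scan                     # corrected near-field samples
```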
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material, which simulates free-space propagation conditions. Moreover, these facilities can be used regardless of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to remove the contribution of the errors easily. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative function-extrapolation algorithms. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, no additional measurements are required. Noise is the error studied most extensively in this Thesis; a total of three alternatives are proposed to filter out a significant noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply spatial filtering. The third back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies spatial filtering. All the alternatives are analyzed for the three most common near-field systems, including comprehensive statistical noise analyses to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture, in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field sample and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second part is able to remove the leakage effect computationally, without requiring the substitution of the faulty component.
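The truncation-error method is summarized only at a high level here; one plausible reading is a Gerchberg-Papoulis-style alternating-projection iteration, sketched below in 1-D, with a fixed iteration count standing in for the termination criterion studied in the Thesis.

```python
import numpy as np

def extrapolate_truncated(spectrum_meas, reliable, aperture, n_iter=50):
    """Hypothetical 1-D sketch in the spirit of the truncation-error
    method: the far-field spectrum is trusted only in its 'reliable'
    region, and the field is known to be confined to the AUT aperture.
    spectrum_meas : measured (truncated) far-field spectrum, 1-D array
    reliable      : boolean mask of the reliable spectral region
    aperture      : boolean mask of the aperture support on the AUT plane
    """
    spectrum = spectrum_meas.copy()
    for _ in range(n_iter):
        field = np.fft.ifft(spectrum)                 # to the AUT plane
        field *= aperture                             # enforce aperture support
        spectrum = np.fft.fft(field)                  # back to spectral domain
        spectrum[reliable] = spectrum_meas[reliable]  # re-impose known data
    return spectrum
```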
Abstract:
Two different methods to reduce the noise power in the far-field pattern of an antenna measured in a cylindrical near-field (CNF) range are proposed. Both methods are based on the same principle: the data recorded in the CNF measurement, assumed to be corrupted by white Gaussian, space-stationary noise, are transformed into a new domain where it is possible to filter out a portion of the noise. The filtered data are then used to calculate a far-field pattern with less noise power than that obtained from the measured data without any filtering. Statistical analyses are carried out to deduce expressions for the signal-to-noise ratio improvement achieved with each method. Although the idea behind the two alternatives is the same, there are important differences between them. The first applies modal filtering, requires oversampling, and improves the far-field pattern in all directions. The second employs spatial filtering on the antenna plane, does not require oversampling, and improves the far-field pattern only in the forward hemisphere. Several examples are presented using both simulated and measured near-field data to verify the effectiveness of the methods.
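To make the modal-filtering alternative concrete, here is a sketch of azimuthal mode truncation on an oversampled cylindrical scan; the highest-significant-mode rule of thumb (k0·a plus a small margin) is a common convention, assumed here rather than taken from the paper.

```python
import numpy as np

def modal_filter_azimuth(E_cyl, freq, a_min):
    """Hypothetical sketch of azimuthal modal filtering: the mode content
    of an antenna enclosed by a cylinder of radius a_min is essentially
    limited to |n| <~ k0*a_min, so the higher-order modes recorded by an
    oversampled scan carry only noise and can be zeroed.
    E_cyl : complex samples, shape (n_z, n_phi); rows = heights,
            columns = azimuth positions around the cylinder."""
    k0 = 2 * np.pi * freq / 3e8
    n_max = int(np.ceil(k0 * a_min)) + 10        # common rule-of-thumb margin
    n_phi = E_cyl.shape[1]
    modes = np.fft.fft(E_cyl, axis=1)            # azimuthal mode spectrum
    n = np.fft.fftfreq(n_phi, d=1.0 / n_phi)     # signed mode indices
    modes[:, np.abs(n) > n_max] = 0.0            # discard noise-only modes
    return np.fft.ifft(modes, axis=1)            # filtered near-field data
```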
Abstract:
Three different methods to reduce the noise power in the far-field pattern of an antenna measured in a cylindrical near-field system are presented and compared. The first is based on modal filtering, while the other two are based on spatial filtering, either on the antenna plane or on a cylinder of smaller radius. Simulated and measured results are presented.
Abstract:
The relationship between medicine and engineering is becoming closer than ever, giving rise to a new discipline, bioengineering, on which this project is focused. This recent field is becoming more and more important owing to the fast development of new technologies that enable, facilitate and improve medical diagnosis compared with traditional procedures.
Within bioengineering, the fastest-growing field is medical imaging, through which images of the inside of the human body can be obtained non-invasively, without the need for surgery. By means of modalities such as magnetic resonance, X-ray, nuclear medicine or ultrasound, images of the human body can be obtained for diagnosis. For those images to be useful within the medical field, they must be processed properly with digital image processing techniques, and it is in this field of digital medical image processing that this project is developed. Thanks to digital image processing methods for extracting information, improving visualization or highlighting features of interest, specialists' diagnoses can be eased and improved. In an age when the automation of processes is much sought after, automating image processing to extract information more easily is very useful. One of the most powerful tools for medical image processing is Matlab, together with its image processing toolbox; its power and versatility simplify the implementation of algorithms, which is why this software was chosen for the practical part of this project. This project is divided into two main parts. In the first, the different modalities for obtaining medical images are described, along with the uses of each method depending on the field of application, followed by a description of the most important digital image processing techniques used in the project. In the second part, four applications are developed in Matlab as practical examples of medical image processing algorithms. These implementations demonstrate the application and usefulness of the concepts explained in the theoretical part, such as segmentation and spatial filtering operations, as well as other specific concepts. The example applications developed are: calculation of the metastasis percentage of a tissue, diagnosis of spinal deformities, estimation of the MTF of a gamma camera, and measurement of the area of a fibroadenoma in a breast ultrasound image. Finally, for each application, its usefulness within the field of medical imaging, the results obtained, and its implementation in a graphical user interface for ease of use are detailed.
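The project's implementations are in Matlab and are not reproduced in this abstract; purely as an illustration, a comparable pipeline for the fibroadenoma-area example might look like the following Python sketch (the median-filter size, the thresholding rule and the dark-lesion assumption are all ours).

```python
import numpy as np
from scipy import ndimage

def lesion_area_mm2(image, pixel_mm, threshold):
    """Hypothetical sketch: spatially filter an ultrasound image to
    suppress speckle, segment by thresholding, keep the largest connected
    region, and report its area in mm^2."""
    smooth = ndimage.median_filter(image, size=5)    # spatial filtering step
    mask = smooth < threshold                        # assume lesion images dark
    labels, n = ndimage.label(mask)                  # connected components
    if n == 0:
        return 0.0
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = sizes.max()                            # assume lesion = largest blob
    return largest * pixel_mm**2                     # pixel count -> mm^2
```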
Abstract:
This thesis presents the experimental work carried out to characterize the optical properties of quantum-well (QW) and quantum-dot (QD) laser diodes. The properties studied in these devices are the gain spectra, differential gain, differential index and linewidth enhancement factor (LEF). Understanding these properties is of special importance for the design of new laser diodes intended for demanding applications such as optical communications or medical applications. The study has been carried out on laser samples supplied by different manufacturers: the Ferdinand-Braun-Institut für Höchstfrequenztechnik, Thales Research and Technology and the University of Würzburg. Because of this, the laser samples were supplied in various configurations, including both broad-area and ridge devices. The work involved the design and construction of the experimental setups and the implementation of the analytical methods required to study the different devices. In the experimental setups, a spatial filtering process was implemented to remove the lateral cavity modes present in broad-area lasers. The analytical methods were used to reduce the errors in the measurement systems, improve their accuracy, and separate the index variation caused by heating from that caused by current variation when the lasers operate in continuous wave.
The systematic study of the optical properties of QW and QD laser diodes leads to the conclusion that properties such as the linewidth enhancement factor are not necessarily lower in QD devices, since they depend on the injection conditions. In the QW lasers, a reduction of the linewidth enhancement factor was experimentally observed when the second transition is reached, mainly due to the increase in differential gain observed in this situation.
Abstract:
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
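A compact sketch of the two-stage filtering scheme for a 1-D luminance profile follows; using the same Gaussian scale at both stages and a scale-normalization exponent of 1.5 are our simplifying assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_scale_space(profile, sigmas):
    """Sketch of the two-stage model: Gaussian 1st-derivative filtering,
    half-wave rectification, Gaussian 2nd-derivative filtering, and
    rectification again. The scale and position of the peak response
    estimate the edge blur and location."""
    response = np.zeros((len(sigmas), len(profile)))
    for i, s in enumerate(sigmas):
        stage1 = gaussian_filter1d(profile, s, order=1)   # odd-symmetric filter
        stage1 = np.maximum(stage1, 0.0)                  # half-wave rectify
        stage2 = gaussian_filter1d(stage1, s, order=2)    # even 2nd-derivative
        response[i] = np.maximum(-stage2, 0.0) * s**1.5   # rectify + scale norm
    i, x = np.unravel_index(response.argmax(), response.shape)
    return x, sigmas[i]                                   # edge location, blur

# e.g. a blurred step edge: recover its position and blur scale
x = np.arange(256.0)
edge = 1.0 / (1.0 + np.exp(-(x - 128) / 4))
print(edge_scale_space(edge, sigmas=np.arange(1.0, 12.0)))
```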
Abstract:
The initial image-processing stages of visual cortex are well suited to a local (patchwise) analysis of the viewed scene. But the world's structures extend over space as textures and surfaces, suggesting the need for spatial integration. Most models of contrast vision fall shy of this process because (i) the weak area summation at detection threshold is attributed to probability summation (PS) and (ii) there is little or no advantage of area well above threshold. Both of these views are challenged here. First, it is shown that results at threshold are consistent with linear summation of contrast following retinal inhomogeneity, spatial filtering, nonlinear contrast transduction and multiple sources of additive Gaussian noise. We suggest that the suprathreshold loss of the area advantage in previous studies is due to a concomitant increase in suppression from the pedestal. To overcome this confound, a novel stimulus class is designed where: (i) the observer operates on a constant retinal area, (ii) the target area is controlled within this summation field, and (iii) the pedestal is fixed in size. Using this arrangement, substantial summation is found along the entire masking function, including the region of facilitation. Our analysis shows that PS and uncertainty cannot account for the results, and that suprathreshold summation of contrast extends over at least seven target cycles of grating. © 2007 The Royal Society.
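The contrast between the two views can be made concrete with a toy calculation (the exponents below are textbook modelling values, not fitted to this paper's data): probability summation is often modelled as Minkowski pooling with exponent m of about 4, predicting thresholds that fall only as area^(-1/4), whereas linear summation of contrast with co-aggregating additive noise predicts a steeper area^(-1/2) decline.

```python
import numpy as np

areas = np.array([1, 2, 4, 8, 16])     # relative stimulus areas
thr_ps = areas ** (-1 / 4)             # probability summation (Minkowski m = 4)
thr_lin = areas ** (-1 / 2)            # linear summation with additive noise
for a, p, l in zip(areas, thr_ps, thr_lin):
    print(f"area x{a:2d}:  PS threshold {p:.3f}   linear-summation {l:.3f}")
```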
Abstract:
Objective: This study aimed to explore methods of assessing interactions between neuronal sources using MEG beamformers. However, beamformer methodology is based on the assumption of no linear long-term source interdependencies [Van Veen BD, van Drongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 1997;44:867-80; Robinson SE, Vrba J. Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Recent advances in Biomagnetism. Sendai: Tohoku University Press; 1999. p. 302-5]. Although such long-term correlations are not efficient and should not be anticipated in a healthy brain [Friston KJ. The labile brain. I. Neuronal transients and nonlinear coupling. Philos Trans R Soc Lond B Biol Sci 2000;355:215-36], transient correlations seem to underlie functional cortical coordination [Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron 1999;49-65; Rodriguez E, George N, Lachaux J, Martinerie J, Renault B, Varela F. Perception's shadow: long-distance synchronization of human brain activity. Nature 1999;397:430-3; Bressler SL, Kelso J. Cortical coordination dynamics and cognition. Trends Cogn Sci 2001;5:26-36]. Methods: Two periodic sources were simulated and the effects of transient source correlation on the spatial and temporal performance of the MEG beamformer were examined. Subsequently, the interdependencies of the reconstructed sources were investigated using coherence and phase synchronization analysis based on Mutual Information. Finally, two interacting nonlinear systems served as neuronal sources and their phase interdependencies were studied under realistic measurement conditions. Results: Both the spatial and the temporal beamformer source reconstructions were accurate as long as the transient source correlation did not exceed 30-40 percent of the duration of beamformer analysis. In addition, the interdependencies of periodic sources were preserved by the beamformer and phase synchronization of interacting nonlinear sources could be detected. Conclusions: MEG beamformer methods in conjunction with analysis of source interdependencies could provide accurate spatial and temporal descriptions of interactions between linear and nonlinear neuronal sources. Significance: The proposed methods can be used for the study of interactions between neuronal sources. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
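For reference, the linearly constrained minimum-variance (LCMV) weights of the Van Veen et al. paper cited above have the standard closed form W = C^{-1} L (L^T C^{-1} L)^{-1}; a minimal sketch (covariance regularization omitted for brevity):

```python
import numpy as np

def lcmv_weights(C, L):
    """LCMV spatial-filter weights, W = C^{-1} L (L^T C^{-1} L)^{-1}.
    C : (n_sensors, n_sensors) sensor covariance matrix
    L : (n_sensors, n_orient) lead-field matrix for one source location
    Real data would need C to be regularized before inversion."""
    Cinv_L = np.linalg.solve(C, L)                   # C^{-1} L
    return Cinv_L @ np.linalg.inv(L.T @ Cinv_L)      # beamformer weights

def source_timecourse(W, data):
    """Project sensor data (n_sensors, n_times) through the spatial filter
    to reconstruct the source activity used for interdependency analyses."""
    return W.T @ data
```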
Abstract:
Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatio-temporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial-filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: those experiments might have revealed the characteristics of suppression (eg, gain control), not excitation, or merely the isotropic subunits of the oriented detecting mechanisms. To shed light on this, we used all three paradigms to focus on the ‘high-speed’ corner of spatio-temporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular, and dichoptic stimulus presentations. To account for our results, we devised a simple model involving an isotropic monocular filter-stage feeding orientation-tuned binocular filters. Both filter stages are adaptable, and their outputs are available to the decision stage following nonlinear contrast transduction. However, the monocular isotropic filters (i) adapt only to high-speed stimuli—consistent with a magnocellular subcortical substrate—and (ii) benefit decision making only for high-speed stimuli (ie, isotropic monocular outputs are available only for high-speed stimuli). According to this model, the visual processes revealed by masking, adaptation, and summation are related but not identical.
Abstract:
Vision must analyze the retinal image over both small and large areas to represent fine-scale spatial details and extensive textures. The long-range neuronal convergence that this implies might lead us to expect that contrast sensitivity should improve markedly with the contrast area of the image. But this is at odds with the orthodox view that contrast sensitivity is determined merely by probability summation over local independent detectors. To address this puzzle, I aimed to assess the summation of luminance contrast without the confounding influence of area-dependent internal noise. I measured contrast detection thresholds for novel Battenberg stimuli that had identical overall dimensions (to clamp the aggregation of noise) but were constructed from either dense or sparse arrays of micro-patterns. The results unveiled a three-stage visual hierarchy of contrast summation involving (i) spatial filtering, (ii) long-range summation of coherent textures, and (iii) pooling across orthogonal textures. Linear summation over local energy detectors was spatially extensive (as much as 16 cycles) at Stage 2, but the resulting model is also consistent with earlier classical results of contrast summation (J. G. Robson & N. Graham, 1981), where co-aggregation of internal noise has obscured these long-range interactions.
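As a schematic illustration of the three-stage hierarchy (the Gabor filter construction, parameter values and the cross-texture pooling exponent are our assumptions, not the fitted model):

```python
import numpy as np
from scipy import ndimage

def local_energy(image, sigma, wavelength, theta):
    """Quadrature-pair local-energy response at one orientation: a rough
    stand-in (our construction) for the oriented Stage 1 filters."""
    y, x = np.mgrid[-3 * sigma:3 * sigma + 1, -3 * sigma:3 * sigma + 1]
    u = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    even = env * np.cos(2 * np.pi * u / wavelength)
    odd = env * np.sin(2 * np.pi * u / wavelength)
    re = ndimage.convolve(image, even)
    im = ndimage.convolve(image, odd)
    return re**2 + im**2                     # local contrast energy

def three_stage_response(image, sigma=4.0, wavelength=8.0, m=4.0):
    """Sketch of the hierarchy: (i) oriented filtering and local energy,
    (ii) extensive linear summation of each coherent texture over space,
    (iii) Minkowski pooling (exponent m, our assumption) across the two
    orthogonal textures."""
    pooled = [local_energy(image, sigma, wavelength, th).sum()   # stage (ii)
              for th in (0.0, np.pi / 2)]                        # two textures
    return sum(p**m for p in pooled) ** (1.0 / m)                # stage (iii)
```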
Abstract:
The visual system combines spatial signals from the two eyes to achieve single vision. But if binocular disparity is too large, this perceptual fusion gives way to diplopia. We studied and modelled the processes underlying fusion and the transition to diplopia. The likely basis for fusion is linear summation of inputs onto binocular cortical cells. Previous studies of perceived position, contrast matching and contrast discrimination imply the computation of a dynamically weighted sum, where the weights vary with relative contrast. For gratings, perceived contrast was almost constant across all disparities, and this can be modelled by allowing the ocular weights to increase with disparity (Zhou, Georgeson & Hess, 2014). However, when a single Gaussian-blurred edge was shown to each eye, perceived blur was invariant with disparity (Georgeson & Wallis, ECVP 2012) – not consistent with linear summation (which predicts that perceived blur increases with disparity). This blur constancy is consistent with a multiplicative form of combination (the contrast-weighted geometric mean), but that is hard to reconcile with the evidence favouring linear combination. We describe a 2-stage spatial filtering model with linear binocular combination and suggest that nonlinear output transduction (eg. 'half-squaring') at each stage may account for the blur constancy.
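A toy numerical check of the blur point made above (our construction, not the authors' model code): combine two identical Gaussian-blurred edge gradients displaced by a disparity d, and estimate the blur of the fused edge from the spread of the combined gradient profile. Linear summation widens the gradient as d grows, whereas a geometric-mean combination leaves it at the monocular blur, matching the reported blur constancy.

```python
import numpy as np

x = np.linspace(-40, 40, 4001)
sigma, d = 4.0, 6.0                                  # monocular blur, disparity
gL = np.exp(-(x + d / 2) ** 2 / (2 * sigma**2))      # left-eye edge gradient
gR = np.exp(-(x - d / 2) ** 2 / (2 * sigma**2))      # right-eye edge gradient

def blur(g):
    """Blur estimate: standard deviation of a gradient profile."""
    p = g / g.sum()
    mu = (x * p).sum()
    return np.sqrt(((x - mu) ** 2 * p).sum())

print("linear sum blur:     %.2f" % blur(0.5 * (gL + gR)))   # grows with d
print("geometric mean blur: %.2f" % blur(np.sqrt(gL * gR)))  # stays at sigma
```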