989 results for spatial frequency
Abstract:
Question: This thesis comprises two articles on the study of emotional facial expressions. The first article covers the development of a new set of emotional stimuli, while the second uses this set to study the effect of trait anxiety on the recognition of static expressions. Methods: A total of 1088 emotional clips (34 actors × 8 emotions × 4 exemplars) were spatially and temporally aligned so that each actor's eyes and nose occupy the same location in every video. All videos last 500 ms and contain the apex of the expression. The set of static expressions was created from the last frame of the clips. The stimuli underwent a rigorous validation process. In the second study, the static expressions were used together with the Bubbles method to study emotion recognition in anxious participants. Results: In the first study, the best stimuli were selected [2 (static & dynamic) × 8 (expressions) × 10 (actors)] and form the STOIC expression set. In the second study, it is shown that individuals with trait anxiety preferentially use the low spatial frequencies of the mouth region of the face and recognize fearful expressions better. Discussion: The STOIC facial expression set has unique characteristics that set it apart from others: it can be downloaded free of charge, it contains natural videos, and all stimuli have been aligned, making it a tool of choice for the scientific community and for clinicians. The static STOIC stimuli were used to take a first step in research on emotion perception in individuals with trait anxiety.
We believe that the use of low spatial frequencies underlies the better performance of these individuals, and that this type of visual information disambiguates expressions of fear and surprise. We also believe that it is neuroticism (the overlap between anxiety and depression), and not anxiety itself, that is associated with better recognition of fearful facial expressions. The use of instruments measuring this construct should be considered in future studies.
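The Bubbles technique used in the second study reveals only random parts of each face, so that recognition accuracy can later be regressed onto the revealed regions. A minimal sketch of the spatial variant, assuming a grayscale image with values in [0, 1] (function names and parameter values are illustrative, not the authors' implementation):

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly centred Gaussian apertures, clipped to [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def bubbles_stimulus(image, n_bubbles=10, sigma=12, seed=0):
    """Show the face only through the bubbles; elsewhere, mean-luminance gray."""
    rng = np.random.default_rng(seed)
    mask = bubble_mask(image.shape, n_bubbles, sigma, rng)
    gray = np.full_like(image, image.mean())
    return mask * image + (1.0 - mask) * gray
```

Accumulating, over many trials, which masks led to correct responses yields a classification image of the diagnostic facial regions.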
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
The natural stimuli projected onto our retinas provide us with rich visual information. This information varies along "low-level" properties such as luminance, contrast, and spatial frequency. While part of this information reaches awareness, another part is processed in the brain without our being conscious of it. Which properties of the information influence brain activity and behaviour consciously versus non-consciously, however, remains poorly understood. This question was examined in the last two articles of this thesis, using the psychophysical techniques developed in the first two. The first article presents the SHINE (spectrum, histogram, and intensity normalization and equalization) toolbox, developed to allow control of low-level image properties in MATLAB. The second article describes and validates the spatial frequency bubbles technique, which was used throughout the studies in this thesis to reveal the spatial frequencies used in various face perception tasks. This technique offers the advantages of high spatial frequency resolution and low experimental bias. The third and fourth articles deal with spatial frequency processing as a function of awareness. In the first, the spatial frequency bubbles method was combined with masked repetition priming to identify the spatial frequencies correlated with observers' behavioural responses when judging the gender of faces presented consciously versus non-consciously. The results show that the same spatial frequencies significantly influence response times in both awareness conditions, but in opposite directions.
In the last article, the spatial frequency bubbles method was combined with intracranial recordings and Continuous Flash Suppression (Tsuchiya & Koch, 2005) to map the spatial frequencies that modulate the activation of specific brain structures (the insula and the amygdala) during conscious versus non-conscious perception of emotional facial expressions. In both regions, the results show that non-conscious perception occurs more quickly and relies more heavily on low spatial frequencies than conscious perception. The contribution of this thesis is thus twofold: methodological contributions to visual perception research through the introduction of the SHINE toolbox and the spatial frequency bubbles technique, and insights into the "correlates of consciousness" obtained with two different approaches.
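The low-level controls SHINE provides can be illustrated with two of its core operations: matching mean luminance/contrast and matching Fourier amplitude spectra across a stimulus set. The sketch below is a loose Python analogue of what the MATLAB toolbox does; function names and target values here are ours, not SHINE's:

```python
import numpy as np

def match_luminance(images, target_mean=0.5, target_std=0.15):
    """Give every image the same mean luminance and RMS contrast."""
    return [(im - im.mean()) / im.std() * target_std + target_mean
            for im in images]

def match_spectrum(images):
    """Give every image the average Fourier amplitude spectrum,
    while keeping each image's own phase spectrum."""
    specs = [np.fft.fft2(im) for im in images]
    avg_amp = np.mean([np.abs(s) for s in specs], axis=0)
    return [np.real(np.fft.ifft2(avg_amp * np.exp(1j * np.angle(s))))
            for s in specs]
```

Because the amplitude spectrum of a real image is symmetric and its phase antisymmetric, the inverse transform is real up to numerical error, so the equalized images remain valid stimuli.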
Abstract:
In this thesis, we investigate each cerebral hemisphere's ability to use the available visual information when recognizing words. It is generally accepted that the left hemisphere (LH) is better equipped for reading than the right hemisphere (RH). Indeed, the visuoperceptual mechanisms used in word recognition are located mainly in the LH (Cohen, Martinaud, Lemer et al., 2003). Since normal readers make optimal use of medium spatial frequencies (about 2.5-3 cycles per degree of visual angle) to recognize letters, the LH may process these better than the RH (Fiset, Gosselin, Blais & Arguin, 2006). Moreover, studies of hemispheric lateralization usually present stimuli in the visual periphery, and it has been proposed that the effect of visual eccentricity on word recognition is unequal across hemifields. In particular, the first letter usually carries the most information for identifying a word. It is also the most eccentric letter when the word is presented to the left visual field (LVF), which may impair its identification independently of the RH's reading abilities. The objective of the first study is to determine the spatial frequency spectrum used by the LH and the RH in word recognition; that of the second is to explore the biases created by eccentricity and letter informativeness under divided-field presentation. First, we find that the spatial frequency spectrum used by the two hemispheres in word recognition is globally similar, even though the LH requires less visual information than the RH to reach the same level of performance. Surprisingly, however, the RH uses higher spatial frequencies to identify longer words.
Second, under LVF presentation, we find that the first letter, i.e. the most eccentric one, is among the best identified, even when it has greater informative value. This contradicts the hypothesis that letter eccentricity exerts a negative bias on words presented to the LVF. Interestingly, our results suggest the presence of a processing strategy specific to the lexicon.
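The 2.5-3 cycles-per-degree band mentioned above depends on viewing conditions, since cycles per degree convert to cycles per pixel through the display's pixels-per-degree. A sketch of such a band-pass filter, assuming a square grayscale image; the ppd value in the test is illustrative, not taken from these studies:

```python
import numpy as np

def bandpass_cpd(image, low_cpd, high_cpd, ppd):
    """Keep only spatial frequencies between low_cpd and high_cpd
    (cycles/degree), given ppd pixels per degree of visual angle."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]   # cycles per pixel, vertical
    fx = np.fft.fftfreq(w)[None, :]   # cycles per pixel, horizontal
    f_cpd = np.hypot(fy, fx) * ppd    # radial frequency in cycles/degree
    keep = (f_cpd >= low_cpd) & (f_cpd <= high_cpd)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * keep))
```

With ppd = 32, for example, a letter subtending 1 degree spans 32 pixels, and the filter retains only the chosen band of its spectrum.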
Abstract:
Polymer materials find application in optical storage technology, namely in the development of high-information-density, fast-access memories. A new polymer blend of methylene blue sensitized polyvinyl alcohol (PVA) and polyacrylic acid (PAA) in methanol is prepared and characterized, and compared with methylene blue sensitized PVA in methanol and with complexed methylene blue sensitized polyvinyl chloride (CMBPVC). The optical absorption spectra of thin films of these polymers showed a strong, broad absorption region at 650-670 nm, matching the wavelength of the laser used. A very slow recovery of the dye on irradiation was observed when a 7:3 blend of polyvinyl alcohol/polyacrylic acid at a pH of 3.8 and a sensitizer concentration of 4.67 × 10⁻⁵ g/ml were used. A diffraction efficiency of up to 20% was observed for the MBPVA/alcohol system, and an energetic sensitivity of 2000 mJ/cm² was obtained in the photosensitive films at a spatial frequency of 588 lines/mm.
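The 588 lines/mm spatial frequency of such a holographic grating is fixed by the recording geometry: two beams of wavelength λ crossing at full angle θ produce fringes of frequency f = 2 sin(θ/2)/λ. A quick numerical check of that relation (the beam angle below is illustrative; the recording geometry is not given in the abstract):

```python
import numpy as np

def grating_lines_per_mm(wavelength_nm, full_angle_deg):
    """Fringe frequency of a two-beam interference pattern:
    f = 2*sin(theta/2)/lambda, theta being the full angle between beams."""
    lam_mm = wavelength_nm * 1e-6  # nm -> mm
    return 2.0 * np.sin(np.radians(full_angle_deg) / 2.0) / lam_mm
```

For a ~650 nm beam, 588 lines/mm corresponds to a beam intersection angle of roughly 22 degrees.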
Abstract:
Super-resolution is an inverse problem: the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR version of an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available; its advantage is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed that outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a small low-resolution image into a large high-resolution one. The super-resolution algorithm not only increases the size but also reduces the degradations that occur while capturing the image. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values; artifacts such as aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, so the lifting scheme is used to implement the directionlets.
The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted on the effect of different wavelets on the single-image super-resolution method. Finally, this new method, implemented on grey-scale images, is extended to colour images and noisy images.
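The lifting scheme mentioned above factors a wavelet transform into cheap in-place predict/update steps. Below is a minimal Haar lifting pair, plus the way a super-resolver would use it: treat the LR signal as the approximation band and estimate the missing detail band before inverting. This is a sketch of the idea (with a zero-detail placeholder), not the thesis's directionlet method:

```python
import numpy as np

def haar_lift_forward(x):
    """One lifting level: predict the odd samples from the even ones,
    then update the evens so the approximation keeps the local mean."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict step
    approx = even + detail / 2   # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order: perfect reconstruction."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

def upsample_1d(lr_signal):
    """Placeholder 2x super-resolution: invert with zero details.
    A learning-based method would instead predict `detail` from a
    training set of HR examples."""
    lr = np.asarray(lr_signal, float)
    return haar_lift_inverse(lr, np.zeros(lr.size))
```

With zero details the result reduces to sample replication; the whole point of learning-based super-resolution is to supply better detail coefficients than that.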
Abstract:
As part of the European Commission (EC)'s revision of the Sewage Sludge Directive and the development of a Biowaste Directive, there was recognition of the difficulty of comparing data from Member States (MSs) because of differences in sampling and analytical procedures. The 'HORIZONTAL' initiative, funded by the EC and MSs, seeks to address these differences in approach and to produce standardised procedures in the form of CEN standards. This article is a preliminary investigation into aspects of the sampling of biosolids, composts and soils to which there is a history of biosolid application. The article provides information on the measurement uncertainty associated with sampling from heaps, large bags and pipes and soils in the landscape under a limited set of conditions, using sampling approaches in space and time and sample numbers based on procedures widely used in the relevant industries and when sampling similar materials. These preliminary results suggest that considerably more information is required before the appropriate sample design, optimum number of samples, number of samples comprising a composite, and temporal and spatial frequency of sampling might be recommended to achieve consistent results of a high level of precision and confidence. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than say 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It’s not clear at present if the method is useful, but it’s worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. 
We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem, how best to merge historic river cross-section data with a LiDAR DTM, will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit the data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5m-wide embankment within a raster grid model with 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment, but how could a 5m-wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
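The local-minima plus interpolation step described above can be sketched in a few lines: take the minimum surface height in a moving window as a ground estimate, smooth it (the low-pass stage that, as noted, can misclassify walls and embankments as vegetation), and read vegetation height off the difference. This is an illustrative toy, not the EA's in-house processing:

```python
import numpy as np

def crude_dtm(dsm, win=5):
    """Ground DTM estimate: local minima of the surface model in
    win x win windows, followed by box smoothing (a low-pass step)."""
    h, w = dsm.shape
    pad = win // 2
    p = np.pad(dsm, pad, mode='edge')
    mins = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            mins[i, j] = p[i:i + win, j:j + win].min()
    q = np.pad(mins, pad, mode='edge')
    dtm = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dtm[i, j] = q[i:i + win, j:j + win].mean()
    return dtm

def vegetation_height(dsm, dtm):
    """Per-cell vegetation (or object) height above the estimated ground."""
    return np.clip(dsm - dtm, 0.0, None)
```

A friction map for the flood model could then be assigned as a function of this per-cell vegetation height, as suggested in the abstract.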
Abstract:
We explored the dependency of the saccadic remote distractor effect (RDE) on the spatial frequency content of target and distractor Gabor patches. A robust RDE was obtained with low-medium spatial frequency distractors, regardless of the spatial frequency of the target. High spatial frequency distractors interfered to a similar extent when the target was of the same spatial frequency. We developed a quantitative model based on lateral inhibition within an oculomotor decision unit. This lateral inhibition mechanism cannot account for the interaction observed between target and distractor spatial frequency, pointing to the existence of channel interactions at an earlier level. (C) 2004 Elsevier Ltd. All rights reserved.
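An oculomotor decision unit of the kind described can be sketched as two accumulators with mutual lateral inhibition, where distractor input delays the target unit's rise to threshold, producing an RDE-like latency cost. All parameter values here are illustrative, not the paper's fitted ones:

```python
def saccade_latency(drive_target, drive_distractor,
                    inhibition=0.5, threshold=10.0, max_steps=500):
    """Two accumulators with mutual lateral inhibition; returns the
    step at which the target unit reaches threshold (None if never)."""
    a_t = a_d = 0.0
    for step in range(1, max_steps + 1):
        # simultaneous update: each unit is inhibited by the other's
        # previous activation
        new_t = max(0.0, a_t + drive_target - inhibition * a_d)
        new_d = max(0.0, a_d + drive_distractor - inhibition * a_t)
        a_t, a_d = new_t, new_d
        if a_t >= threshold:
            return step
    return None
```

In this toy model the latency cost depends only on the strength of the distractor drive, which is why pure lateral inhibition cannot by itself reproduce the target-distractor spatial frequency interaction reported above.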
Abstract:
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both observations and model estimates, with more weight given to data that can be more trusted. Any DA method requires an estimate of the initial forecast error covariance matrix. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective-column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency of the observations and/or their accuracy, the ensemble is able to capture the states and their variability successfully across all regimes.
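The EnSRF analysis step for a single scalar observation can be written compactly: the ensemble mean is updated with the usual Kalman gain, while the perturbations are updated with a reduced gain so that the analysis variance is correct without perturbing the observations (following the serial square-root form of Whitaker & Hamill, 2002). This is a generic sketch, not the thesis's convective-column setup:

```python
import numpy as np

def ensrf_update_scalar(X, y, H, R):
    """Serial EnSRF update for one scalar observation.
    X: (n_state, n_ens) ensemble; H: (n_state,) observation operator row;
    y: observed value; R: observation error variance."""
    n_ens = X.shape[1]
    xbar = X.mean(axis=1, keepdims=True)
    Xp = X - xbar                       # ensemble perturbations
    HX = H @ Xp                         # obs-space perturbations, (n_ens,)
    HPHt = HX @ HX / (n_ens - 1)        # prior variance in obs space
    PHt = Xp @ HX / (n_ens - 1)         # state-obs covariance, (n_state,)
    K = PHt / (HPHt + R)                # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(R / (HPHt + R)))  # gain reduction factor
    xbar_new = xbar[:, 0] + K * (y - H @ xbar[:, 0])
    Xp_new = Xp - alpha * np.outer(K, HX)
    return xbar_new[:, None] + Xp_new
```

The reduction factor alpha guarantees that the analysed obs-space variance equals the Kalman value HPHt*R/(HPHt+R), which is exactly the "spread" property the abstract says the ensemble fails to maintain through the rain discontinuity.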
Abstract:
When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This novel effect, which we call “high phi”, challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit. Our experiments with transients such as texture randomization or contrast reversal show that the magnitude of the jump depends on spatial frequency and transient duration, but not on the speed of the inducing motion signals, and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond dmax, a breakdown of coherent motion perception is expected, but in the presence of an inducer observers again perceive coherent displacements at or just above dmax. In sum, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion, as suggested by the minimal-motion principle, observers perceive jumps whose amplitude closely follows their own dmax limits.
Abstract:
A method for improving the accuracy of surface shape measurement by multiwavelength holography is presented. In our holographic setup, a Bi12TiO20 photorefractive crystal was the holographic recording medium, and a multimode diode laser emitting in the red region was the light source in a two-wave mixing scheme. On employing such lasers, the resulting holographic image appears covered with interference fringes corresponding to the object relief, and the interferogram spatial frequency is proportional to the diode laser's free spectral range (FSR). Our method consists of increasing the effective free spectral range of the laser by positioning a Fabry-Perot etalon at the laser output for mode selection. As larger effective values of the laser FSR were achieved, higher-spatial-frequency interferograms were obtained and therefore more sensitive and accurate measurements were performed. The quantitative evaluation of the interferograms was made with the phase-stepping technique, and the phase map unwrapping was carried out with the cellular-automata method. For a given surface, shape measurements with different interferogram spatial frequencies were performed and compared with respect to measurement noise and visual inspection. (c) 2007 Society of Photo-Optical Instrumentation Engineers.
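The phase-stepping evaluation mentioned above can be illustrated with the standard four-step algorithm: four interferograms with 90-degree phase shifts give the wrapped phase directly. This is a generic sketch with synthetic fringes; for simplicity it uses numpy's elementary 1D unwrap in place of the cellular-automata unwrapping of the abstract:

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four interferograms with 0, 90, 180, 270 deg
    phase steps, I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(I3 - I1, I0 - I2)

# synthetic fringes: 12 cycles across the field, as from a given FSR
x = np.linspace(0.0, 1.0, 400)
phi = 2 * np.pi * 12 * x
frames = [2.0 + 1.0 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = np.unwrap(four_step_phase(*frames))
```

A higher interferogram spatial frequency (more fringes per unit length, i.e. a larger effective FSR) yields more phase samples per unit of relief, which is why the measurements become more sensitive.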
Abstract:
Relief Bragg gratings were recorded on the surface of Ga-Ge-S glass samples by interference of two UV laser beams at 351 nm. Scanning force microscopy was used to perform a 3D image analysis of the resulting surface topography, which shows the superposition of an imprinted grating over the base topography of the glass. An important question regarding the efficiency of the grating is to what extent the base topography, because of its stochastic character, reduces the intended coherent scattering of the grating. To answer this question we separated the base and grating structures by Fourier filtering, examined both spatial frequency and roughness, and determined the correlation. (C) 2001 Elsevier B.V. All rights reserved.
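The Fourier-filtering separation can be sketched as follows: keep a narrow band around the known grating frequency as the periodic component, and take the remainder as the stochastic base topography. The frequencies and bandwidth below are illustrative, not the measured ones:

```python
import numpy as np

def separate_grating(surface, grating_freq, bw=1):
    """Split a measured topography into its periodic grating component
    (a narrow Fourier band around grating_freq, in integer cycles per
    image along x) and the residual base topography."""
    F = np.fft.fft2(surface)
    h, w = surface.shape
    fx = np.abs(np.fft.fftfreq(w) * w)[None, :]  # |cycles per image| in x
    keep = np.abs(fx - grating_freq) <= bw       # narrow band around f0
    grating = np.real(np.fft.ifft2(F * keep))
    return grating, surface - grating
```

The RMS roughness of the residual, compared with the grating amplitude, then quantifies how much the base topography degrades the coherent scattering.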
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)