957 results for Human eye


Relevance:

60.00%

Publisher:

Abstract:

Morse code, invented in 1838 for use in telegraphy, is perhaps one of the first examples of the practical use of data compression [1]: the most common letters of the alphabet are assigned shorter codes than the rest. From 1940 onwards, following the development of information theory and the creation of the first computers, data compression has been a constant and fundamental challenge for researchers of every kind. The greater our understanding of the meaning of information, the greater our success in compressing it. In the case of multimedia information, its nature allows lossy compression, reaching compression ratios impossible for lossless algorithms. These "recent" lossy algorithms have mostly been based on transforming the information to the frequency domain and discarding part of it in that domain. Transformation to the frequency domain has advantages, but it also involves unavoidable computational costs. This thesis presents a new multimedia compression algorithm called "LHE" (Logarithmical Hopping Encoding) that requires no transformation to the frequency domain and instead works in the spatial domain. This makes it a linear algorithm of reduced computational complexity. The results of the algorithm are promising, outperforming the JPEG standard in both quality and speed. As its foundation, the algorithm uses the physiological response of the human eye to light stimuli: the eye, like the other senses, responds to the logarithm of the signal, in accordance with Weber's law. The algorithm consists of several stages. One of them is the measurement of "Perceptual Relevance", a new metric that makes it possible to measure how relevant a piece of information is in the subject's mind and, based on it, to degrade its content to a greater or lesser degree through what the author calls "elastic downsampling". The elastic downsampling stage is an unprecedented technique in digital image processing: it takes more or fewer samples in different areas of an image according to their perceptual relevance. This thesis takes the first steps towards what may become a new standard multimedia compression format (image, video and audio), patent-free and high-performing in both speed and quality.
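The Weber-law idea at the core of LHE lends itself to a brief illustration. The sketch below quantizes per-pixel prediction errors with geometrically growing "hops", so equal perceptual steps map onto multiplicative luminance steps; the hop parameters and the simple row-wise predictor are illustrative assumptions, not the codec defined in the thesis.

```python
import numpy as np

def hop_encode_row(row, h1=4.0, ratio=1.8, hops_per_side=4):
    """Toy hop encoder for one image row.

    Candidate corrections around the running prediction grow
    geometrically (h1, h1*ratio, ...), echoing Weber's law: the eye
    resolves luminance differences relative to the signal level, so
    coarse hops suffice for large errors.
    """
    positive = [h1 * ratio**k for k in range(hops_per_side)]
    levels = np.array([-h for h in reversed(positive)] + [0.0] + positive)
    pred, indices = float(row[0]), []
    for value in row:
        idx = int(np.argmin(np.abs(levels - (float(value) - pred))))
        indices.append(idx)                                    # symbol actually coded
        pred = float(np.clip(pred + levels[idx], 0.0, 255.0))  # decoder-side reconstruction
    return indices
```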

Relevance:

60.00%

Publisher:

Abstract:

Video quality assessment needs to correspond to human perception. Pixel-based metrics (PSNR or MSE) fail in many circumstances because they do not take into account the spatio-temporal properties of human visual perception. In this paper we propose a new pixel-weighted method to improve video quality metrics for artifact evaluation. The method applies a psychovisual model based on motion, level of detail, pixel location and the appearance of human faces, which brings the quality estimate closer to the human eye's response. Subjective tests were developed to adjust the psychovisual model, demonstrating the noticeable improvement of an algorithm that weights the pixels according to the analysed factors instead of treating them equally. The analysis demonstrates the necessity of models adapted to the specific visualization of contents, and the model represents an advance in the quality assessment applied to sequences when a given artifact is analysed.
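As a rough illustration of the pixel-weighting idea (not the paper's exact metric), a PSNR variant can take a per-pixel weight map built from factors such as motion, detail, position and face regions; normalising the map to mean 1, as below, is an assumption that keeps the score comparable to plain PSNR.

```python
import numpy as np

def weighted_psnr(ref, test, weights, max_val=255.0):
    """PSNR with psychovisual per-pixel weights (higher = more salient)."""
    w = weights / weights.mean()               # mean-1 map: plain PSNR is the w=1 case
    mse = np.mean(w * (ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val**2 / mse)
```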

Relevance:

60.00%

Publisher:

Abstract:

Video quality assessment is still needed to define the criteria that characterize a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analysed to obtain the highest possible user satisfaction. Among the problems detected during the development of this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and a variety of content types. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially for stereoscopic 3D content, both animated and live action. Second, the quality measurements applied at the video compression stage employ metrics that are sometimes not adapted to the user's perception. The use of psychovisual models and visual attention maps makes it possible to weight image areas so that greater importance is given to the pixels the user is most likely to focus on. These two blocks are related through the definition of the term saliency: the capacity of the visual system to characterize a viewed image by weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is based not on depth but on additional elements such as motion, level of detail, pixel position and the appearance of faces, which are the basic factors composing the visual attention model developed here. To detect the characteristics of a stereoscopic video sequence most likely to generate visual discomfort, the extensive literature on the subject was reviewed and preliminary subjective tests with users were carried out. The conclusion was that discomfort arose when an abrupt change occurred in the distribution of simulated depths in the image, apart from other degradations such as the so-called "window violation". Further subjective tests, focused on analysing these effects with different depth distributions, were used to pin down the parameters defining such images. The results show that abrupt changes occur in scenes with motion and high negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the time the crystalline lens needs to focus. To improve quality metrics through models adapted to the human visual system, additional subjective tests were performed to determine the importance of each factor in masking a given degradation. The results demonstrate a slight improvement when applying weighting and visual attention masks, which bring the objective quality parameters closer to the human eye's response.
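The abrupt-change criterion suggests a simple screening test. The sketch below flags shot transitions whose disparity (simulated-depth) histograms differ sharply; the binning and threshold are illustrative assumptions, not the thesis's procedure.

```python
import numpy as np

def abrupt_disparity_change(disp_prev, disp_next, bins=32, threshold=0.5):
    """Flag a transition whose simulated-depth distribution shifts abruptly.

    Compares normalised disparity histograms of consecutive shots with an
    L1 distance; values near 2 mean totally disjoint depth ranges.
    """
    lo = float(min(disp_prev.min(), disp_next.min()))
    hi = float(max(disp_prev.max(), disp_next.max()))
    hi = hi if hi > lo else lo + 1e-6           # guard against a degenerate range
    h1, _ = np.histogram(disp_prev, bins=bins, range=(lo, hi), density=True)
    h2, _ = np.histogram(disp_next, bins=bins, range=(lo, hi), density=True)
    width = (hi - lo) / bins
    return np.abs(h1 - h2).sum() * width > threshold
```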

Relevance:

60.00%

Publisher:

Abstract:

The photographic image is a frozen block of space-time, a fragment referring to the before and after of something. When we contemplate a photograph of a domestic interior, we discover a subtle interweaving between the inhabitant and their habitat. We are able to take in more details than the human eye can appreciate in its everyday vision, which is always bound to the flow of space and time. The act of photographing the home, of freezing infinitesimal inhabited units, reveals itself as a radical manifestation of the personal way of inhabiting of each photographer, professional or amateur, and by extension, since today we are all photographers, of each inhabitant. On the one hand, photography is conceived here as a tool capable of revealing, of placing in the world, the elements, perceptions and events that lie interwoven in the construction of the home. On the other hand, the image is understood as a medium of expression and communication, as the universal language of our time, known and used by all. In this moment of maximum interconnection, of networks, data and layers of cognition, of speed and acceleration, this doctoral thesis is conceived as a return to reflection, to the contemplation of the image as object, from the certainty that for an image to speak it must be given time. The research should therefore be understood from an ontological and phenomenological basis: from the experience of the being who inhabits a concrete, determined environment. Framed within the current socio-cultural context of the West, it seeks to unveil the meaning and way of inhabiting of the ordinary dweller, bringing to light what must happen for any house, of any inhabitant, to become a home. The first clues arise from the analysis and hermeneutic reinterpretation of an atlas of images of inhabiting: a body of images assembled from photographic series of the homes of anonymous inhabitants, brought to light through the gaze of a group of artists. Subsequently, the knowledge acquired in that analysis is put to the test, while the research is extended to the feelings of the ordinary inhabitant (no longer the expert artist), through three participatory experiments, or qualitative field studies. The results of both groups of case studies are compiled, organized and structured into a taxonomy of inhabiting. This taxonomy comprises forty-seven parameters that make explicit the complex nature of the 21st-century home, understood as a personal construct of each inhabitant, a process that unfolds in time and space and entails the construction of the inhabitant themselves. The taxonomy is organized around three spheres of the human being. First, the factors related to the way of "physically being" at home, including the inhabitant, the house as architectural space, and its materiality: the objects, furniture, icons and symbols that populate the home. Second, the parameters related to the way of "perceiving": on the one hand, what derives from what is seen; on the other, what derives from what is not seen but felt. Third, the factors relating to the inhabitant who "creates/plays" their home, who on the one hand is in the world by acting, and on the other feels the world by constructing it through a series of relationships established with it. Finally, the research attempts to reveal the synergies, connections and relationships among all these elements drawn from the feelings of the ordinary inhabitant, induced through the analysis and reinterpretation of the case studies, thereby bringing to light an order of things in contemporary Western inhabiting.

Relevance:

60.00%

Publisher:

Abstract:

Poster presented at SPIE Photonics Europe, Brussels, 16-19 April 2012.

Relevance:

60.00%

Publisher:

Abstract:

Purpose: To define a range of normality for the vectorial parameters ocular residual astigmatism (ORA) and topography disparity (TD), and to evaluate their relationship with visual, refractive, anterior and posterior corneal curvature, pachymetric and corneal volume data in normal healthy eyes. Methods: This study comprised a total of 101 consecutive normal healthy eyes of 101 patients ranging in age from 15 to 64 years. In all cases, a complete corneal analysis was performed using a Scheimpflug photography-based topography system (Pentacam, Oculus Optikgeräte GmbH). Anterior corneal topographic data were imported from the Pentacam system into the iASSORT software (ASSORT Pty. Ltd.), which allowed the calculation of ORA and TD. Linear regression analysis was used to obtain a linear expression relating ORA and posterior corneal astigmatism (PCA). Results: The mean magnitude of ORA was 0.79 D (SD: 0.43), with a normality range from 0 to 1.63 D. Ninety eyes (89.1%) showed against-the-rule ORA. A weak although statistically significant correlation was found between the magnitudes of PCA and ORA (r = 0.34, p < 0.01). Regression analysis showed a linear relationship between these two variables, although with very limited predictability (R² = 0.08). The mean magnitude of TD was 0.89 D (SD: 0.50), with a normality range from 0 to 1.87 D. Conclusion: The magnitude of the vector parameters ORA and TD is lower than 1.9 D in the healthy human eye.
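For context, ORA is conventionally defined as the vector difference between refractive astigmatism (referred to the corneal plane) and anterior corneal astigmatism. Below is a minimal power-vector sketch of that textbook definition; the iASSORT software may implement the computation differently.

```python
import numpy as np

def to_power_vector(cyl, axis_deg):
    """Convert cylinder/axis notation to (J0, J45) power-vector components
    (Thibos convention, negative-cylinder form assumed)."""
    ax = np.radians(axis_deg)
    return -cyl / 2 * np.cos(2 * ax), -cyl / 2 * np.sin(2 * ax)

def ora_magnitude(refr_cyl, refr_axis, corneal_cyl, corneal_axis):
    """Magnitude (in dioptres) of the ocular residual astigmatism vector."""
    j0r, j45r = to_power_vector(refr_cyl, refr_axis)
    j0c, j45c = to_power_vector(corneal_cyl, corneal_axis)
    return 2 * np.hypot(j0r - j0c, j45r - j45c)  # back from (J0, J45) to cylinder magnitude
```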

Relevance:

60.00%

Publisher:

Abstract:

This dissertation was primarily engaged in the study of linear and organic perspective applied to landscape drawing, considering perspective a fundamental tool for graphically materializing the sensory experiences offered by the landscape or place to be drawn. The methodology consisted initially of an investigation of perspective theories and perspective representation methods applied to landscape drawing, followed by their practical application to a specific case. Thus, within linear perspective, the following were analysed and explained: visual framing, the representation methods based on descriptive geometry, and the drawing of shadows and of reflections within shadows. In the context of organic perspective, techniques using depth of field, colour, fading, overlapping and light-dark contrast to add depth to the drawing were analysed and described. A set of materials, printing techniques and resources was also explained which, through practical examples executed by different artists over time, shows perspective drawing and the application of the theory. Finally, a set of original drawings was prepared to represent the place of a specific case, using the theories and methods of linear and organic perspective with different materials and printing techniques. The drawings were framed within the "project design", starting with the horizontal and vertical projections of a landscape architecture design, to provide different views of the proposed space. It can be concluded that the techniques and methods described and exemplified were suitable, with some adjustments, for their intended purpose, particularly in landscape design conception, bringing to reality the pictorial sense of the world perceived by the human eye.

Relevance:

60.00%

Publisher:

Abstract:

The inauguration of a human gaze upon the Tupinambá Indian: this is what we find in Jean de Léry (1534-1611), a Huguenot theologian and missionary who landed in Brazil in the sixteenth century to help establish a French colony and to preach the gospel, both to the French who were already there and to the Tupinambá Indians. In his work History of a Voyage to the Land of Brazil, Also Called America, the traveller presents a human view of the Other which, beyond this, also offers itself as a possibility of understanding him. The motives that construct this perspective are what we analyse in this work, starting from the concept of heterology proposed by Michel de Certeau. Our thesis holds that this hermeneutics of the Other in Jean de Léry is determined by the Calvinist theological system of thought. The French traveller's hermeneutic circle is conditioned by Holy Scripture: it departs from it and returns to it. The heterology proposed by Jean de Léry constitutes a science of the Other built upon the Calvinist theological system of thought.

Relevance:

60.00%

Publisher:

Abstract:

Purpose: A clinical evaluation of the Grand Seiko Auto Ref/Keratometer WAM-5500 (Japan) was performed to evaluate validity and repeatability compared with non-cycloplegic subjective refraction and Javal–Schiotz keratometry. An investigation into the dynamic recording capabilities of the instrument was also conducted. Methods: Refractive error measurements were obtained from 150 eyes of 75 subjects (aged 25.12 ± 9.03 years), subjectively by a masked optometrist, and objectively with the WAM-5500 at a second session. Keratometry measurements from the WAM-5500 were compared to Javal–Schiotz readings. Intratest variability was examined on all subjects, whilst intertest variability was assessed on a subgroup of 44 eyes 7–14 days after the initial objective measures. The accuracy of the dynamic recording mode of the instrument and its tolerance to longitudinal movement was evaluated using a model eye. An additional evaluation of the dynamic mode was performed using a human eye in relaxed and accommodated states. Results: Refractive error determined by the WAM-5500 was found to be very similar (p = 0.77) to subjective refraction (difference, -0.01 ± 0.38 D). The instrument was accurate and reliable over a wide range of refractive errors (-6.38 to +4.88 D). WAM-5500 keratometry values were steeper by approximately 0.05 mm in both the vertical and horizontal meridians. High intertest repeatability was demonstrated for all parameters measured: for sphere, cylinder power and MSE, over 90% of retest values fell within ±0.50 D of initial testing. In dynamic (high-speed) mode, the root-mean-square of the fluctuations was 0.005 ± 0.0005 D and a high level of recording accuracy was maintained when the measurement ring was significantly blurred by longitudinal movement of the instrument head. Conclusion: The WAM-5500 Auto Ref/Keratometer represents a reliable and valid objective refraction tool for general optometric practice, with important additional features allowing pupil size determination and easy conversion into high-speed mode, increasing its usefulness post-surgically following accommodating intra-ocular lens implantation, and as a research tool in the study of accommodation.
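The validity and repeatability figures quoted (e.g. -0.01 ± 0.38 D) are mean-difference statistics of the Bland-Altman kind; a minimal sketch of that computation, assumed here rather than taken from the paper, follows.

```python
import numpy as np

def limits_of_agreement(measure_a, measure_b):
    """Bland-Altman bias and 95% limits of agreement between two methods.

    A standard way to compare two refraction techniques (e.g. subjective
    refraction vs. an autorefractor): the bias is the mean paired
    difference, and ~95% of differences fall within bias +/- 1.96 SD.
    """
    diff = np.asarray(measure_a, float) - np.asarray(measure_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```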

Relevance:

60.00%

Publisher:

Abstract:

Ocular dimensions are widely recognised as key variables associated with refractive error. Previously, accurate depiction of eye shape in vivo was largely restricted by limitations in the available imaging techniques. This thesis describes unique applications of the recently introduced 3-dimensional magnetic resonance imaging (MRI) approach to evaluate human eye shape in a group of young adult subjects (n = 76) with a range of ametropia (MSE = -19.76 to +4.38 D). Specific MRI-derived parameters of ocular shape are then correlated with measures of visual function. Key findings include the significant homogeneity of ocular volume in the anterior eye across a range of refractive errors, whilst significant volume changes occur in the posterior eye as a function of ametropia. Anterior vs. posterior eye differences have also been shown through evaluations of equivalent spherical radius; the posterior 25% cap of the eye was shown to be relatively steeper in myopes compared with emmetropes. Further analyses showed differences in retinal quadrant profiles; assessments of the maximum distance from the retinal surface to the presumed visual axis showed exaggerated growth of the temporal quadrant in myopic eyes. Retinal contour values derived from the transformation of peripheral refraction data were compared with MRI; flatter retinal curvature values were noted when using the MRI technique. A distinctive feature of this work is the evaluation of the relationship between ocular structure and visual function. Multiple aspects of visual function were evaluated through several vehicles: multifocal electroretinogram testing, visual field sensitivity testing, and psychophysical methods to determine ganglion cell density. The results show that many quadrantic structural and functional variations exist. In general, the data could not demonstrate a significant correlation between visual function and associated measures of ocular conformation, either within or between myopic and emmetropic groups.

Relevance:

60.00%

Publisher:

Abstract:

The thesis will show how to equalise the effect of quantal noise across spatial frequencies by keeping the retinal flux (If⁻²) constant. In addition, quantal noise is used to study the effect of grating area and spatial frequency on contrast sensitivity, resulting in an extended contrast detection model describing the human contrast detection system as a simple image processor. According to the model, the human contrast detection system comprises low-pass filtering due to the ocular optics, addition of light-dependent noise at the event of quantal absorption, high-pass filtering due to the neural visual pathways, and addition of internal neural noise, after which detection takes place by a local matched filter whose sampling efficiency decreases as grating area is increased. Furthermore, this work will demonstrate how to extract both the optical and neural modulation transfer functions of the human eye. The neural transfer function is found to be proportional to spatial frequency up to the local cut-off frequency at eccentricities of 0-37 deg across the visual field. The optical transfer function of the human eye is proposed to be more affected by the Stiles-Crawford effect than generally assumed in the literature. Similarly, this work questions the prevailing ideas about the factors limiting peripheral vision by showing that peripheral optics act as a low-pass filter in normal viewing conditions, and therefore the effect of peripheral optics is worse than generally assumed.
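The stages of the model line up naturally as a one-dimensional sketch. In the snippet below, only the pipeline order and the neural MTF proportional to frequency are taken from the abstract; the Gaussian optical MTF, the noise levels, and the frequency scaling are illustrative assumptions.

```python
import numpy as np

def detection_model_sketch(signal, f, f_cut, quantal_sd, neural_sd, rng=None):
    """1-D sketch of the described pipeline: optical low-pass -> quantal
    noise -> neural high-pass (gain ∝ f up to f_cut) -> internal noise ->
    matched filter."""
    rng = rng or np.random.default_rng(0)
    n = signal.size
    freqs = np.fft.rfftfreq(n, d=1.0 / (2 * f_cut))            # 0 .. f_cut (cycles/deg, say)
    optical_mtf = np.exp(-(freqs / f_cut) ** 2)                # assumed Gaussian optics
    blurred = np.fft.irfft(np.fft.rfft(signal) * optical_mtf, n=n)
    noisy = blurred + rng.normal(0, quantal_sd, n)             # light-dependent quantal noise
    neural_mtf = np.where(freqs <= f_cut, freqs / f_cut, 0.0)  # ∝ f up to the cut-off
    filtered = np.fft.irfft(np.fft.rfft(noisy) * neural_mtf, n=n)
    internal = filtered + rng.normal(0, neural_sd, n)          # internal neural noise
    template = np.sin(2 * np.pi * f * np.arange(n) / n)        # grating template, f cycles
    return float(internal @ template)                          # matched-filter response
```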

Relevance:

60.00%

Publisher:

Abstract:

Previous research has indicated that schematic eyes incorporating aspheric surfaces but lacking a gradient index are unable to model ocular spherical aberration and peripheral astigmatism simultaneously. This limits their use as wide-angle schematic eyes. This thesis challenges this assumption by investigating the flexibility of schematic eyes comprising aspheric optical surfaces and homogeneous optical media. The full variation of ocular component dimensions found in human eyes was established from the literature, and schematic eye parameter variants were limited to these dimensions. The levels of spherical aberration and peripheral astigmatism modelled by these schematic eyes were compared with the range of measured levels, also established from the literature. To simplify comparison of modelled and measured data, single-value parameters were introduced: the spherical aberration function (SAF) and the peripheral astigmatism function (PAF). Some ocular component variations produced a wide range of aberrations without exceeding the limits of human ocular components. The effect of ocular component variations on coma was also investigated, but no comparison could be made as no empirical data exist. It was demonstrated that by combined manipulation of a number of parameters in the schematic eyes it was possible to model all levels of ocular spherical aberration and peripheral astigmatism. However, the unique parameters of a human eye could not be obtained in this way, as a number of models could be used to produce the same spherical aberration and peripheral astigmatism while giving very different coma levels. It was concluded that these schematic eyes are flexible enough to model the monochromatic aberrations tested, the absence of a gradient index being compensated for by altering the asphericity of one or more surfaces.

Relevance:

60.00%

Publisher:

Abstract:

This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6, but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of the logarithmic critical areas of the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local spatially windowed matched filter. The model was extended to include complex stimuli, and its applicability to the data was found to be successful. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2. The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced into the task. The model described the data accurately.
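The assumed strategy change can be stated compactly: a matched filter cross-correlates the stimulus with a known template, whereas an auto-correlation detector, having no template under parameter randomisation, responds to the stimulus's own energy. A minimal sketch of the two observer strategies follows; the thesis's exact formulation may differ.

```python
import numpy as np

def detect_matched(stimulus, template):
    """Cross-correlation detection: compare the stimulus against a known template."""
    return float(np.asarray(stimulus) @ np.asarray(template))

def detect_autocorrelation(stimulus):
    """Auto-correlation detection: no template available, so respond to
    the stimulus energy instead (a plausible reading of the strategy
    change when orientation/phase are randomised)."""
    s = np.asarray(stimulus)
    return float(s @ s)
```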

Relevance:

60.00%

Publisher:

Abstract:

Purpose. To review the evolution in ocular temperature measurement during the last century and examine the advantages and applications of the latest noncontact techniques. The characteristics and source of ocular surface temperature are also discussed. Methods. The literature was reviewed with regard to progress in human thermometry techniques, the parallel development in ocular temperature measurement, the current use of infrared imaging, and the applications of ocular thermography. Results. It is widely acknowledged that the ability to measure ocular temperature accurately will increase the understanding of ocular physiology. There is a characteristic thermal profile across the anterior eye, in which the central area appears coolest. Ocular surface temperature is affected by many factors, including inflammation. In thermometry of the human eye, contact techniques have largely been superseded by infrared imaging, providing a noninvasive and potentially more accurate method of temperature measurement. Ocular thermography requires high resolution and frame rate: features found in the latest generation of cameras. Applications have included dry eye, contact lens wear, corneal sensitivity, and refractive surgery. Conclusions. Interest in the temperature of the eye spans almost 130 years. It has been an area of research largely driven by prevailing technology. Current instrumentation offers the potential to measure ocular surface temperature with more accuracy, resolution, and speed than previously possible. The use of dynamic ocular thermography offers great opportunities for monitoring the temperature of the anterior eye. © 2005 Contact Lens Association of Ophthalmologists, Inc.

Relevance:

60.00%

Publisher:

Abstract:

The emergence of digital imaging and of digital networks has made duplication of original artwork easier. Watermarking techniques, also referred to as digital signatures, sign images by introducing changes that are imperceptible to the human eye but easily recoverable by a computer program. Error-correcting codes are a good choice for correcting the errors that can occur when extracting the signature. In this paper, we present an error-correction scheme based on a combination of Reed-Solomon codes with another optimal linear code as the inner code. We have investigated the noise strength that this scheme withstands for a fixed image capacity and various signature lengths. Finally, we compare our results with other error-correcting techniques used in watermarking. We have also created a computer program for image watermarking that uses the newly presented scheme for error correction.
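As a sketch of the concatenated-coding idea, the snippet below pairs an outer Reed-Solomon code (via the third-party `reedsolo` package) with a trivial inner repetition code standing in for the paper's optimal inner linear code; it illustrates the structure, not the paper's actual scheme.

```python
from reedsolo import RSCodec

rs = RSCodec(10)  # 10 RS parity bytes: corrects up to 5 erroneous bytes

def protect(signature: bytes, repeat: int = 3) -> bytes:
    """Outer RS encoding, then inner repetition of each byte.
    (Repetition is a stand-in for the paper's inner linear code.)"""
    outer = rs.encode(signature)
    return bytes(b for b in outer for _ in range(repeat))

def recover(received: bytes, repeat: int = 3) -> bytes:
    """Inner majority vote per byte, then outer RS decoding."""
    groups = [received[i:i + repeat] for i in range(0, len(received), repeat)]
    inner = bytes(max(set(g), key=g.count) for g in groups)  # per-byte majority
    return bytes(rs.decode(inner)[0])                        # RS fixes residual errors
```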