37 results for Visual-system Model
at Universidad Politécnica de Madrid
Abstract:
This article presents the model of a multi-agent system (SMAF), whose objectives are the input of fuzzy incidents, as human experts express them, with different degrees of severity, and the subsequent search for and suggestion of solutions. The solutions are later confirmed or rejected by the users. The model was designed, implemented and tested in the telecommunications field, with heterogeneous agents in a cooperative model. In the design, different abstraction levels were considered, according to the agents' objectives, the ways they carry them out and the environment in which they act. Each agent is modeled with a different spectrum of the knowledge base.
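As a loose illustration of how fuzzy severities might drive solution lookup, the sketch below ranks stored fixes by the membership of a reported severity in hypothetical linguistic labels; the labels, their parameters and the `match_solutions` helper are invented for illustration, not SMAF's actual design.

```python
def triangular(x, a, b, c):
    """Triangular membership function, a common building block in fuzzy sets."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical severity vocabulary an expert might use to report an incident.
SEVERITY = {
    "minor":    lambda x: triangular(x, 0.0, 0.2, 0.45),
    "moderate": lambda x: triangular(x, 0.3, 0.5, 0.7),
    "critical": lambda x: triangular(x, 0.55, 0.8, 1.0),
}

def match_solutions(severity_value, solutions):
    """Rank stored solutions by how well their severity label fits the incident."""
    scored = [(SEVERITY[label](severity_value), fix) for label, fix in solutions]
    return [fix for score, fix in sorted(scored, reverse=True) if score > 0]

solutions = [("minor", "reset line card"), ("critical", "fail over to backup route")]
print(match_solutions(0.75, solutions))  # -> ['fail over to backup route']
```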
Abstract:
The main objective of this work is to present a way to emulate some functions of the mammalian visual system, together with a model for analyzing subjective sensations and visual illusions.
Abstract:
Video quality measurement remains necessary to define the criteria that characterize a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analyzed to achieve the highest possible user satisfaction. Among the problems detected during the development of this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and a variety of content types. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially with regard to stereoscopic 3D content, both animated and live action. On the other hand, quality measurement in the video compression phase employs metrics that are not always adapted to the user's perception. The use of psychovisual models and visual attention maps would make it possible to weight areas of the image so that greater importance is given to the pixels the user is most likely to focus on. These two blocks are related through the definition of the term saliency. Saliency is the capacity of the visual system to characterize a viewed image by weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured in terms of the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is based not on depth but on additional elements, such as motion, level of detail, pixel position or the presence of faces, which are the basic factors composing the visual attention model developed here. In order to detect the characteristics of a stereoscopic video sequence most likely to generate visual discomfort, the extensive literature on this subject was reviewed and preliminary subjective tests with users were carried out. The conclusion reached was that discomfort occurred when there was an abrupt change in the distribution of simulated depths in the image, in addition to other degradations such as the so-called "window violation". Through further subjective tests focused on analyzing these effects with different depth distributions, the parameters defining such images were narrowed down. The test results show that abrupt changes arise in scenes with high motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the focusing time of the crystalline lens. To improve quality metrics through models adapted to the human visual system, subjective tests were also carried out to determine the importance of each factor in masking a given degradation. The results show a slight improvement when weighting and visual attention masks are applied, bringing the objective quality parameters closer to the response of the human eye.
ABSTRACT Video quality assessment is still a necessary tool for defining the criteria that characterize a signal meeting the viewing requirements imposed by the final user. New technologies, such as stereoscopic 3D video and formats at and beyond HD, make it necessary to develop new analyses of video features to obtain the highest user satisfaction. Among the problems detected during this doctoral thesis, it was determined that certain phenomena affect different phases of the audiovisual production chain, as well as different types of content. First, the content generation process should be sufficiently controlled through parameters that prevent visual discomfort in the observer's eye and, consequently, visual fatigue; this is especially necessary for stereoscopic 3D sequences, with both animation and live-action content. On the other hand, video quality assessment related to compression processes should be improved, because some objective metrics are not adapted to the user's perception. The use of psychovisual models and visual attention diagrams allows the weighting of image regions of interest, giving more importance to the areas on which the user will most probably focus. These two fields of work are related through the definition of the term saliency. Saliency is the capacity of the human visual system to characterize an image, highlighting the areas that are most attractive to the human eye. In the generation of 3DTV content, saliency refers mainly to the simulated depth of the optical illusion, i.e. the distance from the virtual object to the human eye. In 2D video, by contrast, saliency is based not on virtual depth but on other features, such as motion, level of detail, pixel position in the frame or face detection; these are the basic features of the visual attention model developed here, as demonstrated with tests. The extensive literature on visual comfort assessment was reviewed, and preliminary subjective assessments with users were performed in order to detect the features that increase the probability of discomfort. The conclusions drawn confirmed that one common source of visual discomfort was an abrupt change of disparity in video transitions, along with other degradations such as window violation. New quality assessments were performed to quantify the distribution of disparities over different sequences. The results confirmed that abrupt changes in negative-parallax environments produce accommodation-vergence mismatches derived from the increased time the human crystalline lens needs to focus on virtual objects. Finally, to develop metrics that adapt to the human visual system, additional subjective tests were conducted to determine the importance of each factor in masking a given distortion. The results showed a slight improvement after applying visual attention to objective metrics; this pixel-weighting process brings the quality results closer to the human eye's response.
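To make the pixel-weighting idea concrete, here is a minimal sketch of a saliency-weighted PSNR, assuming a precomputed attention map; the Gaussian map and the noise model below are toy stand-ins for the thesis' attention model, not its actual components.

```python
import numpy as np

def saliency_weighted_psnr(ref, dist, saliency):
    """PSNR where each pixel's squared error is scaled by a saliency weight.

    ref, dist : float arrays in [0, 255], same shape.
    saliency  : non-negative weights, same shape (normalized internally).
    """
    w = saliency / saliency.sum()                 # normalize weights to sum to 1
    wmse = np.sum(w * (ref.astype(float) - dist.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / wmse) if wmse > 0 else float("inf")

# Toy example: a centered Gaussian stands in for a visual-attention map.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
saliency = np.exp(-(((y - h / 2) ** 2 + (x - w / 2) ** 2) / (2 * 12.0 ** 2)))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (h, w))
dist = ref + rng.normal(0, 5, (h, w))             # simulated compression noise
print(f"weighted PSNR: {saliency_weighted_psnr(ref, dist, saliency):.2f} dB")
```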
Abstract:
Sensing systems in living bodies offer a large variety of possible configurations and philosophies that can be emulated in artificial sensing systems. Motion detection is one of the areas where different animals adopt different solutions and, in most cases, these solutions take a very sophisticated form. One of them, the mammalian visual system, presents several advantages with respect to artificial ones. The main objective of this paper is to present a system, based on this biological structure, able to detect motion, its sense and its characteristics. The configuration adopted corresponds to the internal structure of the mammalian retina, where just five types of cells, arranged in five layers, are able to differentiate a large number of characteristics of the image impinging onto it. Its main advantage is that the detection of these properties is based purely on hardware. A simple unit, based on an optical logic cell previously employed in optical computing, is the basis for emulating the different behaviors of the biological neurons. No software is present and, in this way, no possible outside interference affects the final behavior. Once the internal configuration is implemented, this type of structure is able to work without any further attention. The architecture presented offers different possibilities: detection of motion, of its direction and of its intensity. Moreover, some other characteristics, such as symmetry, may be obtained.
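A minimal software sketch of the five-layer idea follows, assuming simple stand-in operations per layer (smoothing for horizontal cells, center-surround differencing for bipolar cells, temporal differencing for amacrine cells). The paper's system is hardware-only, so this is purely illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def retina_step(frame, prev_bipolar):
    """One pass through a toy five-layer retina pipeline.

    Layer operations are illustrative stand-ins for the paper's
    hardware logic cells, not a faithful reproduction of them.
    """
    photo = frame.astype(float)                       # photoreceptors: raw intensity
    horizontal = uniform_filter(photo, size=5)        # horizontal cells: local average
    bipolar = photo - horizontal                      # bipolar: center-surround contrast
    amacrine = bipolar - prev_bipolar                 # amacrine: temporal difference
    ganglion = (np.abs(amacrine) > 10).astype(int)    # ganglion: spike on strong change
    return ganglion, bipolar

# Two synthetic frames with a bright bar moving one pixel to the right.
f0 = np.zeros((32, 32)); f0[:, 10:12] = 255
f1 = np.zeros((32, 32)); f1[:, 11:13] = 255

_, bip0 = retina_step(f0, np.zeros_like(f0))
spikes, _ = retina_step(f1, bip0)
print("motion pixels detected:", int(spikes.sum()))
```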
Abstract:
As is known, there are five types of neurons in the mammalian retina (photoreceptors, horizontal, bipolar, amacrine and ganglion cells) that allow the detection of several important characteristics of the visual image impinging onto the visual system. It is also well known that the amacrine neuron architecture provides a first stage of object-motion detection, making it the most important retinal cell for this function. We have already studied and simulated the Dowling retina model and have verified that many complex processes in visual detection are performed on the basis of the amacrine cells' synaptic connections. This work shows how this structure may be employed for motion detection.
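One classic way to read the amacrine delay-and-compare role is a Reichardt-style correlator. The sketch below is a generic textbook version under that assumption, not the authors' circuit:

```python
import numpy as np

def reichardt_direction(signal_left, signal_right, delay=1):
    """Correlate each receptor with a delayed copy of its neighbour.

    A positive output means left-to-right motion, negative means
    right-to-left; this mirrors the delay-and-compare role often
    attributed to amacrine-cell circuits (an illustrative assumption).
    """
    sl, sr = np.asarray(signal_left, float), np.asarray(signal_right, float)
    rightward = sl[:-delay] * sr[delay:]   # left signal, delayed, meets right signal
    leftward = sr[:-delay] * sl[delay:]    # and vice versa
    return float(np.sum(rightward - leftward))

# A pulse that reaches the left receptor one step before the right one.
left  = [0, 1, 0, 0, 0]
right = [0, 0, 1, 0, 0]
print("direction score:", reichardt_direction(left, right))  # > 0: rightward
```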
Abstract:
To perceive a coherent environment, incomplete or overlapping visual forms must be integrated into meaningful coherent percepts, a process referred to as "Gestalt" formation or perceptual completion. Increasing evidence suggests that this process engages oscillatory neuronal activity in a distributed neuronal assembly. A separate line of evidence suggests that Gestalt formation requires top-down feedback from higher order brain regions to early visual cortex. Here we combine magnetoencephalography (MEG) and effective connectivity analysis in the frequency domain to specifically address the effective coupling between sources of oscillatory brain activity during Gestalt formation. We demonstrate that perceptual completion of two-tone "Mooney" faces induces increased gamma frequency band power (55-71 Hz) in human early visual, fusiform and parietal cortices. Within this distributed neuronal assembly fusiform and parietal gamma oscillators are coupled by forward and backward connectivity during Mooney face perception, indicating reciprocal influences of gamma activity between these higher order visual brain regions. Critically, gamma band oscillations in early visual cortex are modulated by top-down feedback connectivity from both fusiform and parietal cortices. Thus, we provide a mechanistic account of Gestalt perception in which gamma oscillations in feature sensitive and spatial attention-relevant brain regions reciprocally drive one another and convey global stimulus aspects to local processing units at low levels of the sensory hierarchy by top-down feedback. Our data therefore support the notion of inverse hierarchical processing within the visual system underlying awareness of coherent percepts.
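For orientation, the sketch below extracts average power in the reported 55-71 Hz gamma band from a single synthetic channel using a Welch estimate; the study's MEG source reconstruction and frequency-domain effective-connectivity analysis are far more elaborate than this.

```python
import numpy as np
from scipy.signal import welch

def gamma_band_power(signal, fs, band=(55.0, 71.0)):
    """Average spectral power inside the gamma band reported above.

    A generic Welch-PSD estimate; the study's actual source-space
    pipeline and connectivity analysis are not reproduced here.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Synthetic 'sensor': broadband noise plus a 63 Hz oscillation.
fs = 600.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)
meg = 0.5 * np.sin(2 * np.pi * 63 * t) + rng.normal(0, 1, t.size)
print(f"gamma power: {gamma_band_power(meg, fs):.4f}")
```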
Abstract:
A frame-level distortion model based on perceptual features of the human visual system is proposed to improve the performance of unequal error protection strategies and to provide a better quality of experience to users of Side-by-Side 3D video delivery systems.
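A toy sketch of the unequal-error-protection idea: rank frames by a frame-level perceptual distortion score and assign the strongest FEC class to the most visible ones. The `Frame` record, the distortion values and the class mapping are invented for illustration; the paper's distortion model and channel coding are not reproduced.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    distortion: float  # frame-level perceptual distortion if this frame is lost

def assign_protection(frames, levels=3):
    """Map frames to unequal-error-protection classes by perceptual impact.

    Frames whose loss would be most visible get the strongest FEC
    class (class 0). Purely illustrative scheme.
    """
    ranked = sorted(frames, key=lambda f: f.distortion, reverse=True)
    size = max(1, len(ranked) // levels)
    return {f.index: min(i // size, levels - 1) for i, f in enumerate(ranked)}

frames = [Frame(0, 9.1), Frame(1, 2.3), Frame(2, 5.7), Frame(3, 0.8)]
print(assign_protection(frames))  # {0: 0, 2: 1, 1: 2, 3: 2} -> class 0 = strongest
```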
Abstract:
Overhead rail current collector systems for railway traction offer certain features, such as low installation height and reduced maintenance, which make them particularly suitable for use in underground train infrastructures. Due to the increased demands of modern catenary systems and the higher running speeds of new vehicles, a more capable design of the conductor rail is needed. A new overhead conductor rail has been developed and its design has been patented [13]. Modern simulation and modelling techniques were used in the development. The new conductor rail profile has a dynamic behaviour superior to that of the system currently in use. Its innovative design permits either an increase in catenary support spacing or a higher vehicle running speed; both options yield savings in installation or operating costs. The simulation model used to optimise the existing conductor rail profile included both a finite element model of the catenary and a three-dimensional multi-body system model of the pantograph. The contact force between pantograph and catenary was obtained in simulation. A sensitivity analysis of the key parameters that influence catenary dynamics was carried out, finally leading to the improved design.
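To hint at how the pantograph-catenary contact force is evaluated in such simulations, here is a heavily simplified penalty-contact sketch; the stiffness value and trajectories are invented, whereas the actual model couples a finite element catenary with a 3D multibody pantograph.

```python
import numpy as np

def contact_force(z_pantograph, z_catenary, k_contact=50_000.0):
    """Penalty-style contact force between pantograph head and conductor rail.

    F = k * penetration when the bodies overlap, 0 when they separate.
    k_contact is a made-up stiffness for illustration only.
    """
    penetration = z_pantograph - z_catenary
    return np.where(penetration > 0, k_contact * penetration, 0.0)

# Toy vertical trajectories over one support span (metres).
x = np.linspace(0, 1, 200)
z_cat = 0.003 * np.sin(2 * np.pi * x)            # rail flexing between supports
z_pan = 0.001 + 0.0015 * np.sin(2 * np.pi * x)   # pantograph head height
f = contact_force(z_pan, z_cat)
print(f"mean contact force: {f.mean():.1f} N, loss of contact: {(f == 0).mean():.0%}")
```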
Abstract:
A power amplifier supplied with a constant supply voltage has very low efficiency in the transmitter. A DC-DC converter in series with a linear regulator can be used to obtain voltage modulation. Since this converter should be able to change the output voltage very fast, a multiphase buck converter with a minimum-time control strategy is proposed. To modulate the supply voltage of the envelope amplifier, the multiphase converter works at particular duty cycles (i/n, i = 1, 2, ..., n, where n is the number of phases) to generate discrete output voltages; at these duty cycles the output current ripple can be completely cancelled. The transition times for the minimum-time control are pre-calculated and stored in a look-up table. The theoretical background, the system model needed to calculate the transition times, and the experimental results obtained with a 4-phase buck prototype are given.
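The ripple-cancelling levels are easy to enumerate: at duty cycle D = i/n the n interleaved phase ripples are evenly staggered and sum to zero, and the output settles at (i/n)·Vin. A small sketch, with an illustrative input voltage:

```python
def discrete_levels(v_in, n_phases):
    """Output voltages of an n-phase buck at the ripple-cancelling duty cycles.

    At D = i/n the interleaved phase ripples cancel, so V_out = (i/n) * V_in
    are the natural discrete levels for envelope modulation. The input
    voltage here is illustrative only.
    """
    return [(i, i / n_phases * v_in) for i in range(1, n_phases + 1)]

for i, v in discrete_levels(v_in=12.0, n_phases=4):
    print(f"D = {i}/4 -> V_out = {v:.1f} V (phase ripple cancels)")
```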
Abstract:
A high-definition video quality metric built from full-reference ratios. Video quality measurement, or Visual Quality Assessment (VQA), is one of the greatest open challenges in the multimedia environment. Video quality has an enormous impact on the end user's (consumer's) perception of services built on the provision of multimedia content, and is therefore a key factor in the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system; the most important are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which quality is computed by extracting and comparing intrinsic image parameters. Despite the advances of recent years, research on video quality metrics, whether the reference is fully available (full-reference models), partially available (reduced-reference models) or absent (no-reference models), still has a long way to go. Within this field, the measurement of high-definition signals, especially the very high-quality signals used in the first stages of the value chain, is of special interest because of its influence on the final quality of the service, and no reliable measurement models exist today. This doctoral thesis presents a full-reference quality metric we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed through the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the Haralick texture statistics. The novelty of PARMENIA lies in using mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been linked to remote sensing and object segmentation. Furthermore, approaching the metric as a weighted set of ratios is equally novel, since it draws both on structural similarity models and on more classical models based on the perceptibility of the error produced by compression degradation. PARMENIA shows very high correlation with the MOS scores from the subjective user tests carried out for its validation. The selected working corpus comes from internationally validated sequence sets, so that the reported results are of the highest possible quality and rigor.
The working methodology consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining their subjective scores through subjective quality tests (based on International Telecommunication Union Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective values, quantified through the Pearson correlation coefficient. Once the ratios were validated and their influence on the final measure optimized, confirming their high correlation with perception, a second review was carried out on sequences from the HDTV Test Dataset 1 of the Video Quality Experts Group (VQEG), with the results showing clear advantages. Abstract Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions and higher required quality (e.g. high definition and better image quality) is forcing models for quality measurement to be redefined. Given the growing interest in multimedia service delivery, perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches, summing up the state of the art. Then, this doctoral thesis describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information, which provides a normalized metric that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), highly correlated with MOS scores. The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and the contrast and homogeneity Haralick texture features. The metric's performance is excellent, and it improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at very high bit rates whose quality is currently transparent to quality metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, complementing the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity approaches. We would like to point out that PARMENIA is the only metric built upon full-reference ratios and using mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it yields results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials.
All these results come from a validation achieved with internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were carried out. The Pearson correlation coefficient was calculated to verify the accuracy and reliability of PARMENIA.
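A hedged sketch of the ratio-pooling structure follows, assuming equal weights and a simplistic local-contrast stand-in for the visual-similarity term; the Beucher gradient and the Haralick contrast/homogeneity features follow the abstract, while everything else (weights, ratio form, window sizes) is illustrative, not the thesis' trained values.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.morphology import dilation, erosion, square

def ratio(feat_ref, feat_dist):
    """Similarity ratio of a scalar feature: 1.0 when distorted == reference."""
    return min(feat_ref, feat_dist) / max(feat_ref, feat_dist)

def parmenia_like(ref, dist, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted pooling of four full-reference feature ratios."""
    feats = []
    for img in (ref, dist):
        g = img.astype(np.uint8)
        # Beucher (morphological) gradient: dilation minus erosion.
        beucher = (dilation(g, square(3)).astype(float) - erosion(g, square(3))).mean()
        # Haralick texture statistics from a grey-level co-occurrence matrix.
        glcm = graycomatrix(g, distances=[1], angles=[0], levels=256, normed=True)
        contrast = graycoprops(glcm, "contrast")[0, 0]
        homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
        # Stand-in for the visual-similarity (local contrast) feature.
        local_contrast = np.abs(g - g.mean()).mean()
        feats.append((beucher, contrast, homogeneity, local_contrast))
    return sum(w * ratio(a, b) for w, a, b in zip(weights, feats[0], feats[1]))

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
dist = np.clip(ref + rng.normal(0, 8, ref.shape), 0, 255).astype(np.uint8)
print(f"PARMENIA-like score: {parmenia_like(ref, dist):.3f}")  # closer to 1 = better
```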
Abstract:
The fundamental characteristic of Natural Computing is the use of concepts, principles and mechanisms from the workings of Nature. Natural Computing, and within it Membrane Computing, emerges as a possible alternative to classical computing and as a result of the search for new models of computation that can overcome the limitations of conventional models. Specifically, Membrane Computing originated as an attempt to formulate a new computational model inspired by the structure and functioning of biological cells: systems based on this model consist of a membrane structure that acts both as a separator and as a communication channel, and within that structure reside multisets of objects that evolve according to certain evolution rules. The set of devices contemplated by Membrane Computing is generically called P systems. So far, P systems have only been studied at the theoretical level and have not been fully implemented in either electronic or biochemical media; they have only been simulated or partially implemented. The deployment of these systems is therefore an open research challenge. This thesis addresses one of the problems that must be solved to implement P systems on hardware platforms. The specific problem concerns the Transition P System model and arises from the need for rule application algorithms that, regardless of the hardware platform on which they are implemented, are non-deterministic and massively parallel, and whose execution time is statically bounded. As a result, a set of algorithms has been obtained (for both sequential and parallel platforms) suited to the different configurations of P systems. ABSTRACT The main feature of Natural Computing is the use of concepts, principles and mechanisms inspired by Nature. Natural Computing, and within it Membrane Computing, emerges as a potential alternative to conventional computing, born from the search for new models of computation that may overcome the limitations of conventional models. Specifically, Membrane Computing was created to formulate a new computational paradigm inspired by the structure and functioning of biological cells: it consists of a membrane structure, which acts both as separator and as communication channel, within which multisets of objects are stored and evolve according to certain evolution rules. The computing devices addressed by Membrane Computing are generically known as P systems. Up to now, no P system has been fully implemented in electronic or biochemical media; they have only been studied in theory, simulated or partially implemented. Therefore, the implementation of these systems is an open research challenge. This thesis addresses one of the problems to be solved in order to deploy P systems on hardware platforms. This specific problem is focused on the Transition P System model and emerges from the need to provide rule application algorithms that, independently of the hardware platform on which they are implemented, meet the requirements of being non-deterministic, massively parallel and runtime-bounded.
As a result, this thesis has developed a set of algorithms, for both sequential and parallel platforms, adapted to all possible configurations of P systems.
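For intuition, here is a plain sequential emulation of one non-deterministic, maximally parallel rule-application step over a single membrane. The rule encoding and the random shuffle are illustrative choices; a hardware-oriented, statically bounded algorithm of the kind the thesis requires would look quite different.

```python
import random
from collections import Counter

def applicable(rule_lhs, multiset):
    """A rule is applicable if the multiset covers its left-hand side."""
    return all(multiset[obj] >= n for obj, n in rule_lhs.items())

def max_parallel_step(multiset, rules, rng=random):
    """One non-deterministic, maximally parallel step of a transition P system.

    Rules are drawn in random order and applied while objects remain,
    so no applicable rule is left unapplied (maximality). This is a
    plain sequential emulation, not a bounded-time hardware algorithm.
    """
    produced = Counter()
    pool = Counter(multiset)
    order = list(rules)
    rng.shuffle(order)                       # non-deterministic choice of rule order
    progress = True
    while progress:
        progress = False
        for lhs, rhs in order:
            if applicable(lhs, pool):
                for obj, n in lhs.items():
                    pool[obj] -= n           # consume the left-hand side
                produced.update(rhs)         # queue the right-hand side
                progress = True
    return pool + produced                   # produced objects join the next step

# Two competing rules over one membrane: a a -> b  and  a c -> d
rules = [(Counter({"a": 2}), Counter({"b": 1})),
         (Counter({"a": 1, "c": 1}), Counter({"d": 1}))]
print(max_parallel_step(Counter({"a": 5, "c": 2}), rules))
```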
Abstract:
After a short personal view of the first years of photonics in Spain, some references to its present situation are given. As a possible future, the first steps towards a photonics based on the study of the mechanisms employed in the visual systems of living beings are presented.
Abstract:
A novel scheme for depth sequence compression, based on a perceptual coding algorithm, is proposed. A depth sequence describes object positions in the 3D scene and is used, in Free Viewpoint Video, for the generation of synthetic video sequences. In perceptual video coding, characteristics of the human visual system are exploited to improve compression efficiency. As depth sequences are never shown, perceptual video coding assessed directly on them is not effective. The proposed algorithm is based on a novel perceptual rate-distortion optimization process, assessed over the perceptual distortion of the rendered views generated through the encoded depth sequences. The experimental results show the effectiveness of the proposed method, which obtains a very considerable improvement in the perceptual quality of the rendered views.
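The core rate-distortion idea can be stated in a few lines: choose the coding mode minimizing J = D + λ·R, where D is measured on the view rendered from the decoded depth rather than on the depth map itself. The candidate modes, numbers and λ below are invented for illustration:

```python
def rd_cost_rendered(candidates, lam=0.05):
    """Pick the depth-coding mode minimizing J = D_rendered + lambda * R.

    D is measured on the *rendered* view synthesized with the decoded
    depth, not on the depth map itself, which is the abstract's key idea.
    Candidate numbers are invented for illustration.
    """
    return min(candidates, key=lambda c: c["D_rendered"] + lam * c["rate_bits"])

candidates = [
    {"mode": "skip",  "rate_bits": 10,  "D_rendered": 4.0},
    {"mode": "intra", "rate_bits": 120, "D_rendered": 1.2},
    {"mode": "inter", "rate_bits": 45,  "D_rendered": 1.9},
]
best = rd_cost_rendered(candidates)
print("chosen mode:", best["mode"])  # -> inter (lowest combined cost)
```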
Abstract:
The network services we know today are based on documents and hypertext links that relate them to one another without providing true information about the contents they represent. One could say it is "a web designed by people to be interpreted by people". The main goal of recent years has been to steer this network towards a web of knowledge, in which information can be interpreted automatically by computerized agents. Carrying out this transformation requires new technologies specially designed for content description, such as ontologies. While conventional networks are evolving, they are not the only ones doing so. The rapid growth of sensor networks and the significant increase in the number of devices connected to the Internet make it necessary to incorporate semantic web technologies into this type of network. This Final Degree Project uses the SSN ontology, designed for the semantic description of sensors and the networks they belong to, in order to allow better interaction between devices and the systems that use them. The work developed throughout this Project revolves around this ontology, its main objective being the semi-automatic generation of code from a system model described in terms of the classes and properties provided by SSN. To reach this goal, the project is divided into several parts. First, an analysis of the aforementioned ontology is carried out. Next, a simulated sensor system is described and, finally, the applications for automatic interface generation and graphical representation of the system's devices are implemented, starting from the system's representation in an OWL file. ABSTRACT. The web we know today is based on documents and hypertext links that relate these documents with one another, without providing consistent information about the contents they represent. It could be said that it is a network designed by people to be used by people. The main goal of the last few years has been to guide this network into a web of knowledge, where information can be automatically processed by machines. This transformation requires the use of new technologies specially designed for content description, such as ontologies. Nowadays, conventional networks are not the only type of network evolving. The use of sensor networks and the number of sensor devices connected to the Internet are rapidly increasing, making the integration of semantic web technologies into this kind of network completely necessary. The SSN ontology will be used for the development of this Final Degree Dissertation. This ontology was designed to semantically describe sensors and the networks they are part of, allowing a better interaction between devices and the systems that use them. The development carried out through this Final Degree Dissertation revolves around this ontology and aims to achieve semi-automatic code generation starting from a system model described in terms of the classes and properties provided by SSN. To reach this goal, the Dissertation is divided into several parts. First, an analysis of the mentioned ontology is made. Following this, a simulated sensor system is described and, finally, the implementation of the applications takes place.
One of these applications automatically generates the interfaces, and the other graphically represents the devices in the sensor system, making use of the system's representation in an OWL file.
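A minimal sketch of the generation step, assuming the system model is available as an OWL/RDF file using the W3C SOSA/SSN vocabulary; the `system.owl` file name and the emitted stub format are hypothetical, and the project's real generator targets the full SSN class hierarchy.

```python
from rdflib import Graph, Namespace, RDF

# SOSA namespace as published by the W3C (companion to SSN).
SOSA = Namespace("http://www.w3.org/ns/sosa/")

def sensor_stubs(owl_path):
    """Emit a Python class stub per sosa:Sensor found in the system model.

    The stub layout is an illustrative assumption, not the project's
    actual generated interfaces.
    """
    g = Graph()
    g.parse(owl_path)                         # rdflib guesses RDF/XML, Turtle, etc.
    for sensor in g.subjects(RDF.type, SOSA.Sensor):
        name = str(sensor).rsplit("/", 1)[-1].rsplit("#", 1)[-1]
        observed = [str(p).rsplit("/", 1)[-1] for p in g.objects(sensor, SOSA.observes)]
        yield (
            f"class {name.capitalize()}:\n"
            f"    '''Auto-generated stub for sensor <{sensor}>.'''\n"
            f"    observes = {observed!r}\n"
        )

for stub in sensor_stubs("system.owl"):       # hypothetical input file
    print(stub)
```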
Abstract:
Assessing video quality is a complex task. Most pixel-based metrics do not show sufficient correlation between objective and subjective results, yet algorithms need to match human perception when analyzing the quality of a video sequence. To analyze the perceived quality derived from concrete video artifacts in a given region of interest, we present a novel methodology for generating test sequences that allows the impact of each individual distortion to be analyzed. From the results obtained in subjective assessments, it is possible to create psychovisual models based on weighting pixels belonging to different regions of interest, distinguished by color, position, motion or content. The subjective results obtained are interesting and demonstrate the need for new metrics adapted to the human visual system.
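As a sketch of how a test frame with a single localized artifact might be produced, the code below injects coarse blockiness into one rectangular region of interest; the artifact generator, block size and ROI choice are illustrative assumptions, simpler than the methodology's actual generators.

```python
import numpy as np

def inject_blockiness(frame, roi, block=8):
    """Insert a single, localized distortion (blockiness) inside one ROI.

    Quantizing each block to its mean coarsely mimics blocking artifacts;
    roi is (y0, y1, x0, x1) in pixel coordinates.
    """
    y0, y1, x0, x1 = roi
    out = frame.astype(float).copy()
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            patch = out[by:min(by + block, y1), bx:min(bx + block, x1)]
            patch[:] = patch.mean()           # flatten the block
    return out

rng = np.random.default_rng(3)
frame = rng.uniform(0, 255, (64, 64))
distorted = inject_blockiness(frame, roi=(16, 48, 16, 48))
print("pixels changed:", int((distorted != frame).sum()))
```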