48 results for Image Processing, Visual Prostheses, Visual Information, Artificial Human Vision, Visual Perception


Relevance: 100.00%

Abstract:

In this paper we present an efficient hole-filling strategy that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a joint-bilateral filtering framework that includes spatial and temporal information. The missing depth values are obtained by iteratively applying a joint-bilateral filter to their neighboring pixels. The filter weights are selected considering three different factors: visual data, depth information and a temporal-consistency map. Video and depth data are combined to improve depth map quality in the presence of edges and homogeneous regions. Finally, the temporal-consistency map is generated in order to track the reliability of the depth measurements near the hole regions. The obtained depth values are included iteratively in the filtering process of the successive frames, and the accuracy of the depth values in the hole regions increases as new samples are acquired and filtered.
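As an illustration of the kind of filtering the abstract describes, here is a minimal single-pass sketch in Python/NumPy. It is not the authors' implementation: the function name, parameters and the exact way the weighting factors are combined are assumptions for exposition (spatial and visual terms as Gaussian kernels, the temporal-consistency map as a multiplicative weight; the paper additionally folds depth information into the weights).

```python
import numpy as np

def joint_bilateral_fill(depth, color, consistency, radius=5,
                         sigma_s=3.0, sigma_c=10.0):
    """One hole-filling pass: each missing depth value becomes a weighted
    average of its valid neighbors. Weights combine spatial closeness,
    visual (color) similarity and the temporal-consistency map."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in np.argwhere(depth == 0):          # 0 marks a hole
        num = den = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] > 0:
                    wgt = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                           * np.exp(-(float(color[ny, nx]) - float(color[y, x])) ** 2
                                    / (2 * sigma_c ** 2))
                           * consistency[ny, nx])
                    num += wgt * depth[ny, nx]
                    den += wgt
        if den > 0:
            out[y, x] = num / den
    return out
```

Re-running such a pass over successive frames, with newly filled values allowed to support their neighbors, mirrors the iterative scheme described above.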

Relevance: 100.00%

Abstract:

In this paper, we present a depth-color scene modeling strategy for indoor 3D content generation. It combines depth and visual information provided by a low-cost active depth camera to improve the accuracy of the acquired depth maps, considering the different dynamic nature of the scene elements. Accurate depth and color models of the scene background are iteratively built and used to detect moving elements in the scene. The acquired depth data is continuously processed with an innovative joint-bilateral filter that efficiently combines depth and visual information thanks to the analysis of an edge-uncertainty map and the detected foreground regions. The main advantages of the proposed approach are: removing spatial noise and temporal random fluctuations from the depth maps; refining depth data at object boundaries; and iteratively generating a robust depth and color background model together with accurate moving-object silhouettes.
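The iteratively built background models can be pictured with a running-average sketch. The Python fragment below is a deliberate simplification (exponential blending against a fixed depth threshold), not the paper's edge-uncertainty machinery; all names and constants are illustrative assumptions.

```python
import numpy as np

def update_background(bg_depth, bg_color, depth, color,
                      alpha=0.05, depth_thresh=40.0):
    """One frame of a joint depth-color background model: pixels whose
    depth stays close to the model are blended in as background; the
    rest are flagged as foreground (moving elements)."""
    valid = depth > 0                                # ignore depth holes
    foreground = valid & (np.abs(depth - bg_depth) > depth_thresh)
    background = valid & ~foreground
    bg_depth[background] += alpha * (depth[background] - bg_depth[background])
    bg_color[background] += alpha * (color[background] - bg_color[background])
    return bg_depth, bg_color, foreground
```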

Relevance: 100.00%

Abstract:

The increase of multimedia services delivered over packet-based networks has entailed greater quality expectations from end users. This has led to intensive research on techniques for evaluating the quality of experience perceived by viewers of audiovisual content, considering the different degradations that the content may suffer along the broadcasting chain. In this paper, a comprehensive study of the impact of transmission errors affecting video and audio in IPTV is presented. With this aim, subjective assessment tests were carried out following a novel methodology designed to reproduce home viewing conditions as closely as possible. 3DTV content in side-by-side format was also used in the experiments to compare the impact of the degradations. The results provide a better understanding of the effects of transmission errors, and show that the QoE of this first approach to 3DTV is acceptable, but that the visual discomfort it causes should be reduced.

Relevance: 100.00%

Abstract:

We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and the texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether: they are generated automatically by an unwrapping system agreed upon by both encoder and decoder.
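The refinement-image idea can be illustrated with a toy two-level pyramid in Python/NumPy; nearest-neighbor upsampling and the array layout are illustrative choices here, not the paper's codec.

```python
import numpy as np

def upsample(tex):
    """Nearest-neighbor 2x upsampling of a coarser texture LOD."""
    return tex.repeat(2, axis=0).repeat(2, axis=1)

def encode_refinement(fine, coarse):
    """Encoder side: residual image that lifts a texture one LOD up."""
    return fine - upsample(coarse)

def decode_next_lod(coarse, refinement):
    """Decoder side: rebuild the finer LOD when, and only when, the
    display device or the network justifies requesting the residual."""
    return upsample(coarse) + refinement

coarse = np.random.rand(64, 64)
fine = np.random.rand(128, 128)
residual = encode_refinement(fine, coarse)
assert np.allclose(decode_next_lod(coarse, residual), fine)
```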

Relevance: 100.00%

Abstract:

The study of materials, especially biological ones, by non-destructive means is becoming increasingly important in both scientific and industrial applications, and the economic advantages of non-destructive methods are manifold. Numerous physical procedures can extract detailed information from the surface of wood with little or no prior treatment and minimal intrusion into the material. Among them, optical and acoustic techniques stand out for their great versatility, relative simplicity and low cost. Starting from simple physical principles and direct surface measurement, and through the development of the most suitable statistically based decision algorithms, this thesis aims to establish simple, essentially minimum-cost technological solutions for determining the species and the surface defects of each wood sample while altering its working geometry as little as possible. Three lines of work were developed.
Empirical characterization of wood surfaces by means of iterative autocorrelation of laser speckle patterns: a simple and inexpensive method for the qualitative characterization of wood surfaces, based on the iterative autocorrelation of the speckle patterns produced by diffuse laser illumination of the surface. The method exploits the high spatial frequency content of speckle images; a similar approach with raw conventional photographs taken under ordinary light would be very difficult. A few iterations of the algorithm, typically three or four, are enough to visualize the most important periodic features of the surface. The statistical and spectral properties of the scattered light yield more or less regular patterns related to the anatomical structure, composition, processing and surface texture of the wood, which help in studying surface parameters, designing new scattering models and classifying wood species. Because the laser operates at a distance, industrial processes can also be monitored in real time without interfering with other sensors.
Fractal-based image enhancement techniques inspired by differential interference contrast microscopy: differential interference contrast microscopy is a very powerful optical technique for microscopic imaging. Inspired by the physics of this type of microscope, a series of image processing algorithms was developed for the magnification, noise reduction, contrast enhancement and tissue analysis of biological samples. These algorithms use fractal convolution schemes that provide fast and accurate results, with a performance comparable to the best present image enhancement algorithms, and they can be used as post-processing tools for advanced microscopy or to improve the performance of less expensive visualization instruments; several examples visualize microscopic images of raw pine wood samples acquired with a simple desktop scanner. In the classification stage, after the most relevant details of the images are isolated, automatic classification algorithms build databases of the wood species to which the images belong, together with the error margins of those classifications; a fundamental part of these tools rests on the precise study of the color bands of the different woods.
Wood species identification using stress-wave analysis in the audible range: stress-wave analysis is a powerful and flexible technique to study the mechanical properties of many materials. We present a simple technique to obtain information about the species of wood samples using stress-wave sounds, in the audible range, generated by collision with a small pendulum. Stress-wave analysis has been used for flaw detection and quality control for decades, but its use for material identification and classification is less often cited in the literature. These acoustic techniques complement and refine the results obtained with the optical methods, identifying surface and deep structures in the wood as well as pathologies or deformations, which is especially useful when wood is used in structures; their industrial usefulness is well proven, although high costs and the lack of standardized procedures, which prevent one analysis from being compared with its theoretical market equivalent, have limited their adoption. Accurate wood species identification is a time-consuming task for highly trained human experts, and much current research effort takes for granted that differentiating species is a recognition skill specific to human beings, concentrating instead on the definition of physical parameters (moduli of elasticity, electrical or acoustic conductivity, etc.) measured with very expensive instruments that are often complex to use in the field. For these reasons, the development of cost-effective techniques for automatic wood classification is a desirable goal; the approach proposed here is fully non-invasive and non-destructive, significantly reducing the cost and complexity of the identification and classification process.
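The first line of work, iterative autocorrelation of speckle patterns, is compact enough to sketch. This Python/NumPy fragment computes the FFT-based autocorrelation and feeds the normalized result back into itself for the three or four passes the abstract mentions; the function names and the normalization are illustrative assumptions, not the thesis code.

```python
import numpy as np

def autocorrelate(img):
    """FFT-based 2D autocorrelation via the Wiener-Khinchin theorem."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    ac = np.fft.fftshift(ac)             # center the zero-lag peak
    return ac / ac.max()                 # normalize so the peak is 1

def iterative_autocorrelation(speckle, iterations=3):
    """Apply autocorrelation repeatedly (3-4 passes per the abstract)
    to bring out the periodic features of the wood surface."""
    result = speckle.astype(float)
    for _ in range(iterations):
        result = autocorrelate(result)
    return result

# Usage sketch: 'speckle' stands in for a grayscale frame of the
# laser-illuminated wood surface.
speckle = np.random.rand(256, 256)
pattern = iterative_autocorrelation(speckle, iterations=3)
```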

Relevance: 100.00%

Abstract:

The creation of atlases, or digital models where information from different subjects can be combined, is a field of increasing interest in biomedical imaging. When a single image does not contain enough information to appropriately describe the organism under study, it becomes necessary to acquire images of several individuals, each of them containing complementary data with respect to the rest of the components of the cohort. This approach allows creating digital prototypes, ranging from anatomical atlases of human patients and organs, obtained for instance from Magnetic Resonance Imaging, to gene expression cartographies of embryo development, typically achieved with Light Microscopy. Within this context, in this PhD Thesis we propose, develop and validate new dedicated image processing methodologies that, based on image registration techniques, bring information from multiple individuals into alignment within a single digital atlas model. We also elaborate a dedicated software visualization platform to explore the resulting wealth of multi-dimensional data, and novel analysis algorithms to automatically mine the generated resource in search of biological insights. In particular, this work focuses on gene expression data from developing zebrafish embryos imaged at the cellular resolution level with Two-Photon Laser Scanning Microscopy. Having quantitative measurements relating multiple gene expressions to cell position, and their evolution in time, is a fundamental prerequisite to understand the multi-scale processes of embryogenesis. However, the number of gene expressions that can be simultaneously stained in one acquisition is limited due to optical and labeling constraints. These limitations motivate the implementation of atlasing strategies that can recreate a virtual gene expression multiplex. The developed computational tools have been tested in two different scenarios. The first one is early zebrafish embryogenesis, where the resulting atlas constitutes a link between the phenotype and the genotype at the cellular level. The second one is the late zebrafish brain, where the resulting atlas allows studies relating gene expression to brain regionalization and neurogenesis. The proposed computational frameworks have been adapted to the requirements of both scenarios, such as the integration of partial views of the embryo into a whole-embryo model with cellular resolution, or the registration of anatomical traits with deformable transformation models that do not depend on any specific labeling. The software implementation of the atlas generation tool (Match-IT) and the visualization platform (Atlas-IT), together with the gene expression atlas resources developed in this Thesis, are to be made freely available to the scientific community. Lastly, a novel proof-of-concept experiment integrates for the first time 3D gene expression atlas resources with cell lineages extracted from live embryos, opening the door to correlating genetic and cellular spatio-temporal dynamics.
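The thesis relies on deformable transformation models, which are beyond a short sketch, but the core registration idea, mapping one individual's cell coordinates into the shared atlas frame, can be illustrated with its rigid special case. The Kabsch algorithm below is a standard technique offered only as a hedged stand-in for the actual pipeline; it is not the thesis method.

```python
import numpy as np

def kabsch_align(points, reference):
    """Rigidly align one individual's (N, 3) cell positions onto the
    atlas reference frame (least-squares rotation + translation)."""
    p_mean, r_mean = points.mean(axis=0), reference.mean(axis=0)
    H = (points - p_mean).T @ (reference - r_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = r_mean - R @ p_mean
    return points @ R.T + t
```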

Relevance: 100.00%

Abstract:

Digital atlases of animal development provide a quantitative description of morphogenesis, opening the path toward process modeling. Prototypic atlases offer a data integration framework in which to gather information from cohorts of individuals with phenotypic variability. Relevant information for further theoretical reconstruction includes measurements in time and space of cell behaviors and gene expression. The latter, as well as the integration of data into a prototypic model, relies on image processing strategies. Developing the tools to integrate and analyze biological multidimensional data is highly relevant for assessing chemical toxicity or performing preclinical drug testing. This article surveys some of the most prominent efforts to assemble these prototypes, categorizes them according to salient criteria, and discusses the key questions in the field and the future challenges toward the reconstruction of multiscale dynamics in model organisms.

Relevance: 100.00%

Abstract:

In this work we present an optimized fuzzy visual servoing system for obstacle avoidance using an unmanned aerial vehicle. Cross-entropy theory is used to optimize the gains of our controllers. The optimization process was carried out in a ROS-Gazebo 3D simulation with purposeful extensions developed for our experiments. Visual servoing is achieved through an image processing front-end that uses the CamShift algorithm to detect and track objects in the scene. Experimental flight trials using a small quadrotor were performed to validate the parameters estimated in simulation. The integration of cross-entropy methods is a straightforward way to estimate optimal gains, achieving excellent results when tested in real flights.
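The cross-entropy optimization of controller gains follows a standard sample-evaluate-refit loop. The sketch below assumes a generic cost function standing in for one simulated servoing episode (e.g., a ROS-Gazebo rollout returning a tracking-error measure); the gain dimensionality and all constants are illustrative, not taken from the paper.

```python
import numpy as np

def cross_entropy_optimize(cost, dim=3, iters=20, n_samples=50, n_elite=10):
    """Cross-entropy method: sample candidate gain vectors from a
    Gaussian, keep the elite (lowest-cost) fraction, refit the Gaussian
    to the elites, and repeat until the distribution concentrates."""
    mu, sigma = np.ones(dim), np.ones(dim)
    for _ in range(iters):
        candidates = np.abs(np.random.randn(n_samples, dim) * sigma + mu)
        scores = np.array([cost(c) for c in candidates])
        elites = candidates[np.argsort(scores)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

# Toy usage: pretend the optimal gains are (0.8, 0.2, 0.1); in the paper
# 'cost' would instead run one simulated flight and score the tracking error.
best = cross_entropy_optimize(lambda g: np.sum((g - [0.8, 0.2, 0.1]) ** 2))
```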

Relevance: 100.00%

Abstract:

We study a cognitive radio scenario in which the network of secondary users wishes to identify which primary user, if any, is transmitting. To achieve this, the nodes rely on some form of location information. In our previous work we proposed two fully distributed algorithms for this task, with and without a pre-detection step, using propagation parameters as the only source of location information. In a real distributed deployment, each node must estimate its own position and/or propagation parameters. Hence, in this work we study the effect of uncertainty, or error, in these estimates on the proposed distributed identification algorithms. We show that the pre-detection step significantly increases robustness against uncertainty in the nodes' locations.

Relevance: 100.00%

Abstract:

In this project, MATLAB code has been developed for processing X-ray tomographic 3D images of road asphalt specimens from Poland. The 3D images were acquired by a research team at the Lodz University of Technology (LUT). The aim of the project is to create a tool that can be used to study the 3D asphalt specimens and to support the stress tests the samples undergo in the laboratory, with the final goal of finding solutions to the degradation suffered by roads in Poland due to causes such as weather conditions. Road degradation has been investigated for many years, given the strong deterioration caused by factors such as climate, poor maintenance and, in some cases, excessive traffic. In Poland these three factors make the composition of many roads degrade rapidly, above all because of the weather conditions over the year, with temperatures ranging from 30 °C in summer to -20 °C in winter. The road composition suffers badly and the asphalt lifts, which increases maintenance costs and road accidents. The project builds on the research under way at LUT, seeking to improve the analysis of asphalt samples so that stress tests can be performed and solutions found to improve the asphalt of Polish roads, which would markedly reduce maintenance costs. Although the project does not go deeply into the technical aspects of asphalt and its composition, a thorough study of all its characteristics was needed to write code capable of obtaining the best results.
For these reasons, the algorithms that allow the study of the 3D asphalt specimens were developed in MATLAB. This software was chosen because MATLAB is a powerful mathematical tool that operates on matrices quickly, making it possible to develop specific code for the treatment and processing of 3D images. With it, the algorithms perform processes such as segmentation of the 3D image, pre- and post-processing, filtering, and all kinds of microstructural analysis of the asphalt samples under study. The code for segmenting the 3D asphalt samples is comparatively simple in design and development thanks to the image processing tools included in MATLAB, which significantly ease the programming task, and to the segmentation method used. The code was designed with the objective of facilitating the analysis and study of the 3D images of the asphalt samples; the main goal was therefore to create a study tool, written so that it could be integrated into a visual environment and be easier and simpler to use. For this reason, all the algorithms and functions developed here are integrated into a visual tool built with MATLAB's GUIDE. That tool was created in collaboration with Jorge Vega and developed in his final-year project, entitled "Microstructural segmentation of 3D images of asphalt specimens using Matlab"; it uses all the functions programmed in this project and aims to provide an intuitive, easy-to-use graphical environment for the study of 3D asphalt samples.
The project is divided into four chapters plus the introduction, which presents its most important aspects. The second chapter covers the technical background studied to develop the tool, including the three most important topics of this project: asphalt materials, the principles of X-ray 3D tomography, and image processing. This is the basis for the third chapter, which sets out the methodology used in writing the code, explaining the MATLAB working environment and all the image processing functions used, and which presents the developed code together with a theoretical description of the methods used for pre-processing and segmenting the 3D images. Chapter 4 shows the results obtained from the study of one of the asphalt samples, and the final chapter draws the conclusions of the project. All the points established in the preliminary plan for the tool have been carried out, although new possibilities for the code have been left for future projects, for example the automatic detection of the different regions of an asphalt sample according to its composition. As this project shows, image processing techniques are used more and more in a multitude of areas, industrial and medical among them; consequently, work of this kind has many possibilities and can be the basis for many new applications developed in the future. Finally, the project has helped strengthen programming skills, broadening knowledge of MATLAB and of image processing theory, and it provides a basis for a larger project whose outcome will be a tool that can be used by the research team at the Lodz University of Technology and in future projects.
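The project itself is written in MATLAB; as a language-neutral illustration of the segmentation and filtering steps it describes, here is a rough Python/SciPy analogue (global threshold plus connected-component cleanup), with every constant chosen arbitrarily for the sketch.

```python
import numpy as np
from scipy import ndimage

def segment_volume(volume, threshold=None, min_size=50):
    """Toy segmentation of a 3D tomographic volume: global threshold
    (a crude mean split stands in for a proper method such as Otsu),
    then removal of small objects via 3D connected-component labeling."""
    if threshold is None:
        threshold = volume.mean()            # stand-in threshold choice
    mask = volume > threshold
    labels, n = ndimage.label(mask)          # 3D connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = np.nonzero(sizes >= min_size)[0] + 1
    return np.isin(labels, keep_ids)         # boolean mask of kept material
```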

Relevance: 100.00%

Abstract:

We present an adaptive unequal error protection (UEP) strategy built on the 1-D interleaved parity Application Layer Forward Error Correction (AL-FEC) code for protecting the transmission of stereoscopic 3D video content encoded with Multiview Video Coding (MVC) through IP-based networks. Our scheme targets the minimization of quality degradation produced by packet losses during video transmission in time-sensitive application scenarios. To that end, based on a novel packet-level distortion model, it selects in real time the most suitable packets within each Group of Pictures (GOP) to be protected and the most convenient FEC technique parameters, i.e., the size of the FEC generator matrix. In order to make these decisions, it considers the relevance of the packet, the behavior of the channel, and the available bitrate for protection purposes. Simulation results validate both the distortion model introduced to estimate the importance of packets and the optimization of the FEC technique parameter values.
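The underlying 1-D interleaved parity primitive can be sketched in a few lines; the column-XOR packetization below is an assumption in the spirit of standard interleaved parity AL-FEC codes, while the paper's contribution, choosing which GOP packets to protect and the generator-matrix size, sits on top of this primitive.

```python
def interleaved_parity(packets, depth):
    """1-D interleaved parity: packet i is assigned to column i mod depth
    and each column is XORed into one FEC packet, so one lost packet per
    column can be recovered by XORing the survivors with the parity."""
    size = len(packets[0])
    assert all(len(p) == size for p in packets)
    fec = [bytes(size) for _ in range(depth)]
    for i, pkt in enumerate(packets):
        col = i % depth
        fec[col] = bytes(a ^ b for a, b in zip(fec[col], pkt))
    return fec

# Example: 6 equal-size media packets, interleaving depth 3 -> 3 parity packets.
parity = interleaved_parity([bytes([i] * 4) for i in range(6)], depth=3)
```

Increasing the depth spreads protection over more packets at lower overhead per packet, which is the kind of trade-off the adaptive scheme tunes against the channel behavior and the available protection bitrate.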

Relevance: 100.00%

Abstract:

Although the delivery of 3D video services to households is nowadays a reality thanks to frame-compatible formats, many efforts are being made to obtain efficient methods of transmitting 3D content that offer a high quality of experience to end users. In this paper, a stereoscopic video streaming scenario is considered and the perceptual impact of various strategies applicable to adaptive streaming situations is compared. Specifically, the mechanisms are based on switching between copies of the content with different coding qualities, on discarding frames of the sequence, on switching from 3D to 2D, and on using asymmetric coding of the stereo views. In addition, when video freezes happen, the possibilities of preserving the end-to-end latency or of maintaining the continuity of the video are considered. These aspects were evaluated in a subjective assessment test, which also considered visual discomfort, using a methodology designed to reproduce domestic viewing conditions as far as possible.

Relevance: 100.00%

Abstract:

Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. A loss of track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their use in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimation, it can help more sophisticated algorithms to regain track or to make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
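The on-line half of the scheme is a nearest-neighbor search over precomputed descriptor/pose pairs. The Python sketch below uses a KD-tree, which is one reasonable indexing choice rather than the paper's stated one; all dimensions, sizes and names are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

# Offline stage: descriptors of an articulated model configured in many
# poses (the paper reports a database of about one million entries; a
# smaller random stand-in is used here so the sketch runs quickly).
descriptors = np.random.rand(100_000, 32)   # (N, d) 3D Zernike descriptors
poses = np.random.rand(100_000, 20)         # matching joint configurations
tree = cKDTree(descriptors)                 # built once, offline

def rough_pose_estimates(query_descriptor, k=5):
    """Online stage: the k most likely poses for an input volume, found
    by nearest-neighbor search of its descriptor against the database."""
    _, idx = tree.query(query_descriptor, k=k)
    return poses[idx]
```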

Relevance: 100.00%

Abstract:

Current fusion devices comprise multiple diagnostics and hundreds or even thousands of signals. This situation often makes distributed data acquisition systems the best approach. In this type of distributed system, one of the most important issues is the synchronization between signals, so that the temporal correlation between the acquired samples of all channels is as accurate as possible. In recent decades, many fusion devices have used different types of video cameras to provide inside views of the vessel during operations and to monitor plasma behavior. The synchronization between each video frame and the rest of the signals acquired from the other diagnostics is essential to follow the plasma evolution correctly, since all the information can then be analyzed jointly with accurate knowledge of its temporal correlation. The system described in this paper allows timestamping of image frames in a real-time acquisition and processing system using IEEE 1588 clock distribution. The system has been implemented using FPGA-based devices together with a 1588-synchronized timing card (see Fig. 1). The solution is based on a previous system [1] that provides image acquisition and real-time image processing based on PXIe technology. This architecture is fully compatible with the ITER Fast Controllers [2] and offers integration with EPICS to control and monitor the entire system. However, that set-up is not able to timestamp the acquired frames, since the frame grabber module does not provide any type of timing input (IRIG-B, GPS, PTP). To address this lack, an IEEE 1588 PXI timing device is used to provide an accurate way to synchronize distributed data acquisition systems using the Precision Time Protocol (PTP) IEEE 1588-2008 standard. This local timing device can be connected to a master clock device for global synchronization. The timing device has a timestamp buffer for each PXI trigger line and requires that a software application assign each frame the corresponding timestamp. This step is critical and cannot be achieved if the frame rate is high. To solve this problem, a solution has been designed that distributes the clock from the IEEE 1588 timing card to all FlexRIO devices [3]. This solution uses two PXI trigger lines that provide the capacity to assign timestamps to every acquired frame and to register events in hardware in a deterministic way. The system thus provides a solution for timestamping frames so that they can be synchronized with the rest of the signals.
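The pairing problem can be stated in a few lines. This Python sketch only illustrates the software matching of frames to latched trigger timestamps that, as the abstract notes, becomes infeasible at high frame rates, which is why the final design distributes the IEEE 1588 clock to the FlexRIO FPGAs instead; the class and method names are invented for illustration and do not correspond to any real API.

```python
from collections import deque

class FrameTimestamper:
    """Pairs acquired frames with hardware timestamps latched by the
    IEEE 1588 timing card on a PXI trigger line (software sketch only;
    the paper's system performs the equivalent matching in hardware)."""
    def __init__(self):
        self.ts_fifo = deque()    # timestamps read from the timing card
        self.frames = deque()     # frames from the frame grabber

    def on_trigger_timestamp(self, ts):
        self.ts_fifo.append(ts)
        return self._match()

    def on_frame(self, frame):
        self.frames.append(frame)
        return self._match()

    def _match(self):
        """Emit a (timestamp, frame) pair as soon as both are available."""
        if self.ts_fifo and self.frames:
            return self.ts_fifo.popleft(), self.frames.popleft()
        return None
```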

Relevance: 100.00%

Abstract:

The aim of this project is to evaluate the performance improvement provided by the parallelization of image processing algorithms for execution on a graphics card. To this end, once the algorithms under study had been selected, each of them was developed in C++ under the sequential paradigm. Then, taking these implementations as a basis, the algorithms were parallelized following the directives of the CUDA (Compute Unified Device Architecture) technology developed by NVIDIA. Subsequently, a graphical user interface was developed in Visual C# to make the tool simpler to use. Finally, the performance of each algorithm was measured, in terms of parallel execution time and speedup, by processing a series of images of different sizes.
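The project's two reported metrics are parallel execution time and speedup, with speedup defined as the sequential time divided by the parallel time. A minimal timing harness (written in Python here purely for illustration; the project itself uses C++/CUDA) might look like:

```python
import time

def measure_speedup(sequential_fn, parallel_fn, image):
    """Time both implementations on the same image and report
    (parallel time, speedup = t_sequential / t_parallel)."""
    t0 = time.perf_counter()
    sequential_fn(image)
    t_seq = time.perf_counter() - t0
    t0 = time.perf_counter()
    parallel_fn(image)
    t_par = time.perf_counter() - t0
    return t_par, t_seq / t_par
```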