960 results for Noisy 3D data
Abstract:
The driverless transport vehicle "FiFi" is controlled contactlessly through gesture and person recognition based on 3D data of its environment. The person-recognition methods used in some cases falsely detect persons in objects. This paper describes the causes of these false detections and presents the solutions implemented to avoid them. Experiments confirm that the developed methods increase the robustness of the system.
Abstract:
Mesoscopic 3D imaging has become a widely used optical imaging technique to visualize intact biological specimens. Selective plane illumination microscopy (SPIM) visualizes samples up to a centimeter in size with micrometer resolution by 3D data stitching but is limited to fluorescent contrast. Optical projection tomography (OPT) works with fluorescent and nonfluorescent contrasts, but its resolution is limited in large samples. We present a hybrid setup (OPTiSPIM) combining the advantages of each technique. The combination of fluorescent and nonfluorescent high-resolution 3D data into integrated datasets enables a more extensive representation of mesoscopic biological samples. The modular concept of the OPTiSPIM facilitates incorporation of the transmission OPT modality into already established light sheet based imaging setups.
Abstract:
Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth. DOI: http://dx.doi.org/10.7554/eLife.05864.001
Abstract:
PURPOSE Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. MATERIALS AND METHODS Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and on the medical field of interest. RESULTS Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. CONCLUSION Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, with all data captured in a single step.
Abstract:
OBJECTIVES The aim of this Short Communication was to present a workflow for the superimposition of an intraoral scan (IOS), cone-beam computed tomography (CBCT), and an extraoral face scan (EOS) to create a 3D virtual dental patient. MATERIAL AND METHODS As a proof of principle, a full-arch IOS, a preoperative CBCT, and a mimic EOS were taken and superimposed into a single 3D data pool. The connecting link between the different files was the detection of existing teeth as constant landmarks in all three data sets. RESULTS This novel application technique successfully demonstrated the feasibility of building a craniofacial virtual model by image fusion of IOS, CBCT, and EOS under static 3D conditions. CONCLUSIONS The presented application is the first approach to realize the fusion of intraoral and facial surfaces combined with skeletal anatomy imaging. This novel 3D superimposition technique allows the simulation of treatment planning, the exploration of patients' expectations, and its use as an effective communication tool. The next step will be the development of a real-time 4D virtual patient in motion.
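The superimposition step described above, aligning IOS, CBCT and EOS through teeth used as shared landmarks, is at its core a rigid point-set registration. The sketch below is not the authors' software; it is a minimal illustration, with made-up landmark coordinates, of how the rotation and translation between two landmark sets can be recovered with the Kabsch algorithm.

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigid transform (R, t) mapping landmark set P onto Q (rows = points)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)               # covariance of centred landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical tooth-cusp landmarks in the IOS frame (mm) ...
ios = np.array([[0, 0, 0], [10, 0, 0], [10, 8, 0], [0, 8, 2]], float)
# ... and the same teeth located in the CBCT frame (rotated and shifted).
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
cbct = ios @ R_true.T + np.array([5.0, -2.0, 3.0])

R, t = kabsch_align(ios, cbct)
resid = np.abs(ios @ R.T + t - cbct).max()
print(round(resid, 6))  # ~0: landmarks coincide after alignment
```

With more than three non-collinear landmarks the fit is a least-squares one, so the residual also serves as a sanity check on how well the "constant landmarks" assumption holds across the scans.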
Abstract:
This paper describes the language identification (LID) system developed by the Patrol team for the first phase of the DARPA RATS (Robust Automatic Transcription of Speech) program, which seeks to advance state-of-the-art detection capabilities on audio from highly degraded communication channels. We show that techniques originally developed for LID on telephone speech (e.g., for the NIST language recognition evaluations) remain effective on the noisy RATS data, provided that careful consideration is applied when designing the training and development sets. In addition, we show significant improvements from the use of Wiener filtering, neural-network-based and language-dependent i-vector modeling, and fusion.
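The fusion the abstract mentions is typically done at score level: each subsystem emits per-language scores that are combined with trained weights. The sketch below is a hedged illustration, not the Patrol team's actual fusion (which would be trained, e.g. by logistic regression); the subsystems, languages, scores and weights are all invented.

```python
import math

def fuse_scores(system_scores, weights, bias=0.0):
    """Linear score-level fusion: weighted sum of per-system language scores.
    system_scores: list of {language: score} dicts, one per subsystem."""
    langs = system_scores[0].keys()
    return {L: sum(w * s[L] for w, s in zip(weights, system_scores)) + bias
            for L in langs}

def softmax(scores):
    """Turn fused scores into posterior-like probabilities."""
    m = max(scores.values())
    exps = {L: math.exp(v - m) for L, v in scores.items()}
    z = sum(exps.values())
    return {L: e / z for L, e in exps.items()}

# Hypothetical log-likelihood scores from two subsystems (say, an acoustic
# i-vector system and a phonotactic system) on one degraded test segment.
acoustic = {"farsi": 1.2, "pashto": 0.4, "urdu": -0.3}
phonotactic = {"farsi": 0.8, "pashto": 1.1, "urdu": -0.5}

fused = fuse_scores([acoustic, phonotactic], weights=[0.6, 0.4])
post = softmax(fused)
best = max(post, key=post.get)
print(best)  # prints "farsi": both systems lean that way after weighting
```

In a real evaluation the weights and bias would be trained on a held-out development set, which is exactly why the paper stresses careful design of the training and development sets.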
Abstract:
Several groups around the world are researching ways to render 3D sound. One way to achieve this is to use Head Related Transfer Functions (HRTFs). These measurements contain the frequency response of the human head and torso for each angle. Until a few years ago it was only possible to measure these frequency responses in the horizontal plane. Nowadays, several improvements have made it possible to measure and use 3D data for this purpose. The problem was that the groups did not have a standard file format in which to store the data, which was an obstacle whenever a third party wanted to use a different set of HRTFs for 3D audio rendering: each group had its own way of storing the data. The Spatially Oriented Format for Acoustics (SOFA) was created to solve this problem. It is a format definition that unifies the previously disparate ways of storing any kind of acoustic data. At the time of this project, its authors had defined the basis of the format and some recommendations for storing HRTFs. It is still under development, so changes may come. The SOFA [1] file format uses a numeric container called netCDF [2], specifically the enhanced data model described in netCDF-4, which is based on HDF5 [3]. The SoundScape Renderer (SSR) is a tool for real-time spatial audio reproduction providing a variety of rendering algorithms. The SSR was developed at the Quality and Usability Lab at TU Berlin and is now further developed at the Institut für Nachrichtentechnik at Universität Rostock [4]. This project is intended as an introduction to the use of SOFA files, providing a C++ API to manipulate them and adapting the binaural renderer of the SSR to work with the SOFA format.
SUMMARY. The SSR (SoundScape Renderer) is a program currently being developed by Universität Rostock, and previously by the Technische Universität Berlin. The SSR is a tool designed for real-time 2D audio reproduction and rendering. To that end it uses various algorithms, some aimed at systems built from loudspeaker arrays in different configurations and others designed for headphones. The main objective of this project is to give the SSR the capability of rendering binaural 3D sound. The project is centred on the SSR's binaural renderer, an algorithm based on the use of HRTFs (Head Related Transfer Functions). HRTFs represent the transfer function of the system formed by the listener's head and torso, measured from different angles. With these data the binaural renderer can generate real-time audio simulating the positions of different sources. To include a database of 3D HRTFs, the new SOFA (Spatially Oriented Format for Acoustics) format has been used. This format is at a fairly early stage of its development. It is intended to serve as a standard format for storing HRTFs and any other kind of acoustic measurement, since each laboratory currently has its own storage format, which makes it quite difficult to use several different databases in the same project. The SOFA format uses the netCDF numeric container, which in turn is based on a more basic container called HDF5. To incorporate the SOFA format into the SSR's binaural renderer, a C++ API has been developed for creating and reading SOFA files, so that the data they contain can be used within the SSR.
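Once the HRTFs are loaded from a SOFA file, a binaural renderer has to pick, for each virtual source direction, the measured HRTF closest to it. The sketch below (in Python rather than the project's C++, with a made-up measurement grid standing in for a SOFA file's source positions, and not using any actual SOFA API) shows such a nearest-direction lookup via the great-circle angle.

```python
import math

def angular_distance(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two directions given as
    azimuth/elevation pairs in degrees."""
    a1, e1, a2, e2 = map(math.radians, (az1, el1, az2, el2))
    cosd = (math.sin(e1) * math.sin(e2)
            + math.cos(e1) * math.cos(e2) * math.cos(a1 - a2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosd))))

def nearest_hrtf(grid, az, el):
    """Index of the measured HRTF direction closest to (az, el)."""
    return min(range(len(grid)),
               key=lambda i: angular_distance(grid[i][0], grid[i][1], az, el))

# Hypothetical, very sparse measurement grid of (azimuth, elevation) pairs;
# a real HRTF database is far denser.
grid = [(0, 0), (30, 0), (60, 0), (90, 0), (0, 30), (0, -30)]
print(nearest_hrtf(grid, az=25, el=5))  # prints 1, the index of (30, 0)
```

Real renderers usually go further and interpolate between neighbouring measurements, but nearest-neighbour selection is the simplest baseline and needs only the grid stored in the file.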
Abstract:
Paper presented at the X Workshop of Physical Agents, Cáceres, 10-11 September 2009.
Abstract:
In this article, we present JavaVis, a new framework for teaching subjects related to computer vision. It is a computer vision library divided into three main areas: the 2D package covers classical computer vision processing; the 3D package, which includes a complete 3D geometric toolset, is used for 3D vision computing; and the Desktop package comprises a tool for graphically designing and testing new algorithms. JavaVis is designed to be easy to use, both for launching and testing existing algorithms and for developing new ones.
Abstract:
Reactive lymph nodes (LNs) are sites where pMHC-loaded dendritic cells (DCs) interact with rare cognate T cells, leading to their clonal expansion. While DC interactions with T cell subsets critically shape the ensuing immune response, surprisingly little is known about their spatial orchestration at physiologically low T cell precursor frequencies. Light sheet fluorescence microscopy, and one of its implementations, selective plane illumination microscopy (SPIM), is a powerful method to obtain precise spatial information on entire organs of 0.5-10 mm diameter, the size range of murine LNs. Yet its usefulness for immunological research has thus far not been comprehensively explored. Here, we have tested and defined protocols that preserve fluorescent protein function during the lymphoid tissue clearing required for SPIM. Reconstructions of SPIM-generated 3D data sets revealed that calibrated numbers of adoptively transferred T cells and DCs are successfully detected at the single-cell level within optically cleared murine LNs. Finally, we define parameters to quantify specific interactions between antigen-specific T cells and pMHC-bearing DCs in murine LNs. In sum, our studies describe the successful application of light sheet fluorescence microscopy to immunologically relevant tissues.
Abstract:
Modeling natural phenomena from 3D information enhances our understanding of the environment. Dense 3D point clouds are increasingly used as highly detailed input datasets. In addition to the capturing techniques of point clouds with LiDAR, low-cost sensors have been released in the last few years, providing access to new research fields and facilitating 3D data acquisition for a broader range of applications. This letter presents an analysis of different speleothem features using 3D point clouds acquired with the gaming device Microsoft® Kinect. We compare the Kinect sensor with terrestrial LiDAR reference measurements using the KinFu pipeline for capturing complete 3D objects (< 4 m³). The results demonstrate the suitability of the Kinect to capture flowstone walls and to derive morphometric parameters of cave features. Although the chosen capturing strategy (KinFu) reveals a high correlation (R² = 0.92) of stalagmite morphometry along the vertical object axis, a systematic overestimation (22% for radii and 44% for volume) is found. The comparison of flowstone wall datasets predominantly shows low differences (mean of 1 mm with 7 mm standard deviation), of the order of the Kinect depth precision. For both objects the major differences occur at strongly varying and curved surface structures (e.g. with fine concave parts).
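The letter's headline numbers (R² = 0.92 along the vertical axis, 22% radius overestimation) come from comparing Kinect-derived morphometry against LiDAR reference measurements. A minimal sketch of that kind of comparison, with invented radii chosen to mimic the reported bias rather than the letter's actual data:

```python
def r_squared(x, y):
    """Coefficient of determination of a least-squares linear fit y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def mean_relative_bias(reference, test):
    """Mean relative overestimation of `test` against `reference` (fraction)."""
    rel = [(t - r) / r for r, t in zip(reference, test)]
    return sum(rel) / len(rel)

# Hypothetical stalagmite radii (mm): terrestrial-LiDAR reference vs
# Kinect/KinFu-derived values with a systematic overestimation.
lidar  = [40.0, 45.0, 52.0, 61.0, 70.0]
kinect = [49.0, 55.0, 63.5, 74.0, 86.0]

print(round(r_squared(lidar, kinect), 3))        # high correlation
print(round(mean_relative_bias(lidar, kinect), 2))  # ≈ 0.22, i.e. 22 %
```

The point the toy numbers make is the same as the letter's: correlation alone (R² near 1) does not rule out a systematic scale bias, so both statistics are needed.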
Abstract:
We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces with and without white makeup. Subjects were filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup is able to produce smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can register the surfaces over time and relax the homogeneous-color restriction to a single-hue subject. Quantitative and qualitative experiments explore both the practicality and the limitations of this simple multispectral capture system.
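Multispectral photometric stereo recovers a normal per pixel because the red, green and blue channels act as three simultaneous lighting directions: for each channel c, intensity I_c = albedo * (l_c · n), which is a 3x3 linear system per pixel. A toy single-pixel sketch, with hypothetical light directions and a synthetic intensity triple rather than the authors' calibrated setup:

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[A[i][k] if k != j else b[i] for k in range(3)]
                 for i in range(3)]) / d for j in range(3)]

def normal_from_rgb(lights, rgb):
    """Per-pixel surface normal from one RGB frame under three coloured
    lights: I_c = albedo * (l_c . n), hence albedo * n = L^-1 I."""
    g = solve3(lights, rgb)                 # g = albedo * n
    mag = sum(v * v for v in g) ** 0.5
    return [v / mag for v in g]             # unit normal (albedo = |g|)

# Hypothetical calibrated light directions for the R, G, B lamps (unit-ish).
L = [[0.707, 0.0, 0.707],
     [-0.707, 0.0, 0.707],
     [0.0, 0.707, 0.707]]
# Synthetic pixel: albedo 0.9, true normal pointing straight at the camera.
n_true = [0.0, 0.0, 1.0]
rgb = [0.9 * sum(l[k] * n_true[k] for k in range(3)) for l in L]

print([round(v, 3) + 0.0 for v in normal_from_rgb(L, rgb)])  # → [0.0, 0.0, 1.0]
```

This is also where the single-hue restriction mentioned in the abstract comes from: the derivation assumes one albedo scales all three channels equally, which multi-coloured surfaces violate.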
Abstract:
There is an increased need for 3D recording of archaeological sites and digital preservation of their artifacts. Digital photogrammetry with prosumer DSLR cameras is a suitable tool for recording epigraphy in particular, as it allows for the recording of inscribed surfaces with very high accuracy, often better than 2 mm, and with only a short time spent in the field. When photogrammetry is fused with other computational photography techniques such as panoramic tours and Reflectance Transformation Imaging (RTI), a workflow exists to rival traditional LiDAR-based methods. The difficulty, however, arises in the presentation of 3D data, which requires an enormous amount of storage and end-user sophistication. The proposed solution is to use game-engine technology and high-definition virtual tours to provide not only scholars but also the general public with an uncomplicated interface for interacting with the detailed 3D epigraphic data. The site of Stobi, located near Gradsko in the Former Yugoslav Republic of Macedonia (FYROM), was used as a case study to demonstrate the effectiveness of RTI, photogrammetry and virtual tour imaging working in combination. A selection of nine sets of inscriptions from the archaeological site was chosen to demonstrate the range of application of the techniques. The chosen marble, sandstone and breccia inscriptions are representative of the varying levels of deterioration and degradation of the epigraphy at Stobi, whose rates of decay and resulting legibility vary. The selection includes both treated and untreated stones, as well as stones in situ and in storage, and consists of both Latin and Greek inscriptions with content ranging from temple dedications to statue dedications. This combination of 3D modeling techniques presents a cost- and time-efficient solution both to increase the legibility of severely damaged stones and to digitally preserve the current state of the inscriptions.
Abstract:
The purpose of this paper is to investigate the potential for use of UAVs in underground mines and present a prototype design for a novel autorotating UAV platform for underground 3D data collection.
Abstract:
The main contribution of this thesis is the proposal of novel strategies for the selection of parameters arising in variational models employed for the solution of inverse problems with data corrupted by Poisson noise. In light of the importance of using a significantly small dose of X-rays in Computed Tomography (CT), and the consequent need for advanced techniques to reconstruct objects from highly noisy data, we will focus on parameter selection principles especially for low photon counts, i.e. low-dose Computed Tomography. For completeness, since such strategies can be adopted in various scenarios where the noise in the data typically follows a Poisson distribution, we will show their performance for other applications such as photography, astronomical and microscopy imaging. More specifically, in the first part of the thesis we will focus on low-dose CT data corrupted only by Poisson noise, extending automatic selection strategies designed for Gaussian noise and improving the few existing ones for Poisson noise. The new approaches will be shown to outperform the state-of-the-art competitors, especially in the low-counting regime. Moreover, we will propose to extend the best-performing strategy to the hard task of multi-parameter selection, showing promising results. Finally, in the last part of the thesis, we will introduce the problem of material decomposition for hyperspectral CT, whose data encode how the different materials in the target attenuate X-rays in different ways according to the specific energy. We will conduct a preliminary comparative study to obtain accurate material decomposition starting from a few noisy projections.
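One classical family of selection principles for Poisson data is the discrepancy principle: among candidate reconstructions, pick the one whose generalized Kullback-Leibler data fit matches its expected value under Poisson noise (roughly n/2 for n measurements, i.e. normalized discrepancy 2D/n near 1). The sketch below is a toy stand-in, not a method from the thesis: a moving-average smoother plays the role of the variational reconstruction and its window width plays the role of the regularization parameter.

```python
import math
import random

def poisson_sample(lam, rng=random):
    """Knuth's Poisson sampler (fine for the moderate rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_discrepancy(y, model):
    """Generalized Kullback-Leibler divergence between counts y and a model;
    for the true model its expected value is roughly n/2."""
    d = 0.0
    for yi, mi in zip(y, model):
        d += mi - yi
        if yi > 0:
            d += yi * math.log(yi / mi)
    return d

def moving_average(y, width):
    """Box smoother: toy stand-in for a regularized reconstruction whose
    smoothing strength is the window width."""
    n, h = len(y), width // 2
    return [sum(y[max(0, i - h):i + h + 1]) / len(y[max(0, i - h):i + h + 1])
            for i in range(n)]

def select_width(y, widths):
    """Discrepancy principle: normalized discrepancy 2*D/n closest to 1."""
    n = len(y)
    return min(widths, key=lambda w:
               abs(2 * poisson_discrepancy(y, moving_average(y, w)) / n - 1))

random.seed(0)
# Synthetic low-count measurement: smooth intensity profile + Poisson noise.
truth = [20 + 10 * math.sin(i / 8) for i in range(200)]
counts = [poisson_sample(t) for t in truth]
print(select_width(counts, [1, 3, 5, 9, 15, 25]))
```

Width 1 reproduces the data exactly (discrepancy 0, an over-fit), while very wide windows over-smooth (discrepancy far above n/2); the principle lands in between, which is the same trade-off the thesis's automatic strategies navigate for real variational models.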