904 results for 2D triangular meshes


Relevance:

20.00%

Publisher:

Abstract:

Automatic 2D-to-3D conversion is an important application for filling the gap between the increasing number of 3D displays and the still scant 3D content. However, existing approaches have an excessive computational cost that complicates their practical application. In this paper, a fast automatic 2D-to-3D conversion technique is proposed that uses a machine learning framework to infer the 3D structure of a query color image from a training database of color and depth images. Assuming that photometrically similar images have analogous 3D structures, a depth map is estimated by searching for the most similar color images in the database and fusing the corresponding depth maps. Larger databases are desirable for better results, but the computational cost also increases. A clustering-based hierarchical search using compact SURF descriptors to characterize the images is proposed to drastically reduce search times. A significant reduction in computational time has been obtained with respect to other state-of-the-art approaches, while maintaining result quality.
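
As an illustration of the clustering-based hierarchical search and depth fusion described above, here is a minimal sketch in which random vectors stand in for the compact SURF descriptors, and the database, cluster assignment, and depth maps are all hypothetical toy data; the function names are ours, not the paper's.

```python
import numpy as np

def hierarchical_depth_search(query_desc, descriptors, depth_maps,
                              centroids, labels, k=3):
    """Two-level search: pick the nearest cluster, then rank only that
    cluster's members, and fuse the depth maps of the k best matches."""
    # Level 1: closest cluster centroid to the query descriptor.
    c = np.argmin(np.linalg.norm(centroids - query_desc, axis=1))
    members = np.where(labels == c)[0]
    # Level 2: rank only the members of that cluster.
    d = np.linalg.norm(descriptors[members] - query_desc, axis=1)
    nearest = members[np.argsort(d)[:k]]
    # Fuse: per-pixel median of the retrieved depth maps (robust to outliers).
    return np.median(depth_maps[nearest], axis=0)

# Toy database: 100 random descriptors, random depth maps, 2 clusters.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(100, 64))
depth_maps = rng.uniform(size=(100, 8, 8))
labels = (descriptors[:, 0] > 0).astype(int)
centroids = np.stack([descriptors[labels == i].mean(0) for i in (0, 1)])
est = hierarchical_depth_search(descriptors[3], descriptors, depth_maps,
                                centroids, labels)
```

Searching only one cluster's members is what turns a linear scan of the whole database into the drastically cheaper hierarchical lookup the abstract refers to.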


A series of motion compensation algorithms is run on the challenge data, including methods that optimize only a linear transformation, only a non-linear transformation, or both (first a linear and then a non-linear transformation). Methods that optimize a linear transformation run an initial segmentation of the area of interest around the left myocardium by means of independent component analysis (ICA) (ICA-*). Methods that optimize non-linear transformations may run directly on the full images or after linear registration. The non-linear motion compensation approaches applied include one method that only registers pairs of images in temporal succession (SERIAL), one that registers all images to one common reference (AllToOne), one designed to exploit quasi-periodicity in image data acquired during free breathing and adapted to also be usable with data acquired after an initial breath-hold (QUASI-P), one that uses ICA to identify and eliminate the motion (ICA-SP), and one that relies on the estimation of a pseudo ground truth (PG) to guide the motion compensation.
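
The SERIAL scheme (registering each image to its temporal predecessor and composing the results) can be sketched for the special case of pure integer translations; this toy uses FFT-based cross-correlation and illustrates only the chaining idea, not the challenge's actual registration method.

```python
import numpy as np

def estimate_shift(a, b):
    """Integer displacement s with b ~ np.roll(a, s, axis=(0, 1)),
    found at the peak of the circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts relative to frame a.
    return tuple(-(p if p <= n // 2 else p - n)
                 for p, n in zip(peak, corr.shape))

def serial_register(frames):
    """SERIAL-style chaining: register each frame to its predecessor and
    accumulate the shifts so every frame maps back to frame 0."""
    total = (0, 0)
    shifts = [total]
    for prev, cur in zip(frames, frames[1:]):
        dy, dx = estimate_shift(prev, cur)
        total = (total[0] + dy, total[1] + dx)
        shifts.append(total)
    return shifts

# Toy sequence: one noise image circularly shifted by increasing amounts.
rng = np.random.default_rng(1)
base = rng.normal(size=(32, 32))
frames = [np.roll(base, (s, s), axis=(0, 1)) for s in (0, 2, 5)]
shifts = serial_register(frames)
```

Because only consecutive pairs are registered, each pairwise problem involves small motion, which is the usual motivation for a serial scheme over registering distant frames directly.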


Apples can be considered a complex system formed by several structures at different organization levels: macroscale (>100 μm) and microscale (<100 μm). This work implements 2D T1/T2 global and localized relaxometry sequences on whole apples to perform an intensive, non-destructive, and non-invasive microstructure study. 2D T1/T2 cross-correlation spectroscopy allows quantitative information to be extracted about water compartmentation in different subcellular organelles. A clear difference is found: sound apples show distinct peaks for water in different subcellular compartments, such as vacuolar, cytoplasmic, and extracellular water, whereas in watercore-affected tissue these compartments appear merged. Localized relaxometry allows slices to be predefined in order to study the microstructure of a particular region of the fruit, providing information that cannot be derived from global 2D T1/T2 relaxometry.
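
The compartment peaks mentioned above arise because each water pool contributes its own exponential decay to the measured signal. A minimal 1D illustration with made-up T2 values and pool sizes (a real 2D T1/T2 analysis inverts a Laplace kernel rather than fitting a known two-component basis):

```python
import numpy as np

# Hypothetical T2 values (ms) for two water pools, e.g. vacuolar and
# cytoplasmic, and their relative sizes. Not values from the paper.
T2 = np.array([900.0, 300.0])
amps = np.array([0.7, 0.3])
t = np.arange(1, 201, dtype=float) * 5.0   # echo times, ms

# Multi-exponential CPMG-style decay: S(t) = sum_i a_i * exp(-t / T2_i)
signal = (amps * np.exp(-t[:, None] / T2)).sum(axis=1)

# With the T2 basis fixed, recovering the pool sizes is linear least
# squares; merged compartments (as in watercore tissue) would instead
# show up as a single broad component.
design = np.exp(-t[:, None] / T2)
recovered, *_ = np.linalg.lstsq(design, signal, rcond=None)
```

The separation of peaks in the 2D T1/T2 map corresponds to how distinguishable these exponential components are in the measured decay.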


1D and 2D patterning of uncharged micro- and nanoparticles via dielectrophoretic forces on photovoltaic z-cut Fe:LiNbO3 has been investigated for the first time. The technique has been successfully applied to dielectric microparticles of CaCO3 (diameter d = 1-3 μm) and metal nanoparticles of Al (d = 70 nm). Unlike previous experiments on x- and y-cut samples, the obtained patterns locally reproduce the light distribution with high fidelity. A simple model is provided to analyse the trapping process. The results show the remarkably good capabilities of this geometry for high-quality 2D light-induced dielectrophoretic patterning, overcoming the important limitations presented by previous configurations.


An automatic machine learning strategy for computing the 3D structure of monocular images from a single image query using Local Binary Patterns is presented. The 3D structure is inferred from a training set composed of a repository of color and depth images, assuming that images with similar structure have similar depth maps. Local Binary Patterns are used to characterize the structure of the color images. The depth maps of the color images whose structure is most similar to the query image are adaptively combined and filtered to estimate the final depth map. Using public databases, promising results have been obtained, outperforming other state-of-the-art algorithms at a computational cost similar to the most efficient 2D-to-3D algorithms.
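
The basic 3x3 Local Binary Pattern operator used to characterize image structure can be sketched as follows (this is the generic LBP definition; the paper may use a different neighbourhood or variant):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: each pixel's 8 neighbours are thresholded against
    the centre pixel and the resulting bits are packed into one byte."""
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

def lbp_histogram(img, bins=256):
    """Structure descriptor: normalised histogram of LBP codes, the kind
    of compact signature that can be compared between images."""
    codes = lbp_image(img)
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

# Toy binary checkerboard texture: only two LBP codes appear.
checker = (np.indices((6, 6)).sum(0) % 2).astype(np.uint8)
hist = lbp_histogram(checker)
```

Comparing such histograms (rather than raw pixels) is what makes retrieving structurally similar images from the training repository cheap.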


This paper describes the dielectrophoretic potential created by the evanescent electric field acting on a particle near a photovoltaic crystal surface, depending on the crystal cut. This electric field is obtained from the steady-state solution of the Kukhtarev equations for the photovoltaic effect, where the diffusion term has been disregarded. First, the space-charge field generated by a small, square light spot with d ≪ l (d being the side of the square and l the crystal thickness) is studied. The surface charge density generated in both geometries is calculated and compared, as their ratio determines the different properties of the dielectrophoretic potential for the two cuts. The shape of the dielectrophoretic potential is obtained and compared for several distances to the sample. Afterwards, other light patterns are studied by superposition of square spots, and the resulting trapping profiles are analysed. Finally, the surface charge densities and trapping profiles for different d/l ratios are studied.
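
For reference, the trapping force behind such potentials is the standard time-averaged dielectrophoretic force on a small spherical particle of radius $r$ in a medium of permittivity $\varepsilon_m$ (the textbook dipole-approximation expression, not a formula quoted from this paper):

```latex
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
  = 2\pi \varepsilon_m r^{3}\,
    \operatorname{Re}\!\left[ K(\omega) \right]
    \nabla \lvert \mathbf{E} \rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}
                 {\varepsilon_p^{*} + 2\varepsilon_m^{*}}
```

Here $K(\omega)$ is the Clausius-Mossotti factor; the sign of $\operatorname{Re}[K]$ determines whether particles collect at maxima or minima of the evanescent field, i.e., positive versus negative dielectrophoresis.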


Since their appearance in the 1950s, video games have been very present in young people's lives. The market has never stood still: the scientific revolution of the twentieth century drove its great evolution, starting with a game implemented on a cathode-ray machine, passing through arcade halls, and finally entering family homes through PCs and consoles, now with a graphic quality that makes it hard to distinguish between reality and the virtual world. But video games are not conceived only for leisure and for disconnecting from reality. There is a concept called "gamification": a technique that seeks to transmit knowledge, help people with disabilities, facilitate rehabilitation, and so on, by transforming activities that might seem boring into something fun. This is where serious games come in: games that pursue exactly those goals. Since video games are now deeply embedded in society and reach an ever wider audience of different ages and genders, this project was chosen to promote the creation of serious games through a complete tutorial using the Unity game engine, trying to use as many of its tools as possible.


Video quality assessment remains necessary to define the criteria characterizing a signal that meets the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analysed to maximize user satisfaction. Among the problems identified during this doctoral thesis are phenomena that affect different stages of the audiovisual production chain and varied types of content. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially for stereoscopic 3D content, both animated and live-action. Second, quality assessment in the video compression stage relies on metrics that are sometimes not adapted to the user's perception. Psychovisual models and visual attention diagrams make it possible to weight image areas so that greater importance is given to the pixels the user is most likely to focus on. These two blocks are related through the definition of saliency: the capacity of the visual system to characterize a viewed image by weighting the areas most attractive to the human eye. In stereoscopic content generation, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is based not on depth but on additional elements (motion, level of detail, pixel position, and the presence of faces), which are the basic factors composing the visual attention model developed here.
To detect the characteristics of a stereoscopic video sequence most likely to generate visual discomfort, the extensive literature on the topic was reviewed and preliminary subjective tests were conducted with users. These led to the conclusion that discomfort arises when there is an abrupt change in the distribution of simulated depths in the image, in addition to other degradations such as the so-called "window violation". New subjective tests focused on analysing these effects with different depth distributions were used to pin down the parameters defining such images. The results show that abrupt changes occur in scenes with high motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the focusing time of the crystalline lens. To improve quality metrics through models adapted to the human visual system, further subjective tests helped determine the importance of each factor in masking a given degradation. The results show a slight improvement when applying weighting and visual attention masks, which bring the objective quality parameters closer to the response of the human eye.


In the context of 3D reconstruction, we present a static multi-texturing system yielding a seamless texture atlas, calculated by combining the colour information from several photos of the same subject covering most of its surface. These pictures can be provided by shooting with a single camera several times, when reconstructing a static object, or by a set of synchronized cameras, when dealing with a human or any other moving subject. We suppress the colour seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by colour blending techniques. Our system is robust enough to compensate for the almost inevitable inaccuracies of 3D meshes obtained with visual-hull-based techniques: errors in silhouette segmentation, inherently poor handling of concavities, etc.
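
At the core of any multi-texturing scheme is a normalised weighted blend of the per-camera colour contributions; a minimal sketch with hypothetical per-texel confidence weights (the paper's actual weighting and seam treatment are more elaborate, precisely to avoid the blur a naive blend introduces):

```python
import numpy as np

def blend_textures(colours, weights):
    """Weighted per-texel blend: each camera contributes in proportion to
    its confidence (e.g. viewing angle, distance to silhouette edges).
    colours: (n_cams, h, w, 3); weights: (n_cams, h, w)."""
    # Normalise weights per texel; the clip avoids division by zero
    # where no camera sees the surface.
    w = weights / np.clip(weights.sum(axis=0, keepdims=True), 1e-12, None)
    return (w[..., None] * colours).sum(axis=0)

# Toy case: two cameras, one black and one white texture, weights 1:3.
colours = np.stack([np.zeros((2, 2, 3)), np.ones((2, 2, 3))])
weights = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
blended = blend_textures(colours, weights)
```

Smoothly varying the weights across texture-patch borders is what removes visible seams; the trade-off against blurring is the tension the abstract describes.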


In a Finite Element (FE) analysis of elastic solids, several items are usually considered, namely the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh, and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. Different objective functions have been chosen as improvement criteria (total potential energy and average quadratic error), while the number of nodes and dofs of the new mesh remain constant and equal to those of the initial FE mesh. In order to find the mesh that minimizes the selected objective function, the steepest descent gradient technique has been applied as the optimization algorithm; this efficient technique has the drawback of demanding large computational power. Extensive application of this methodology to different 2D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the reasonable regular initial meshes normally used in practice, and this conclusion seems to be independent of the objective function used for comparison. The ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e., the curves tangent at each point to the principal direction lines of the elastic problem to be solved, and the nodes should be regularly spaced in order to build regular elements. This means that ii-meshes are usually obtained by iteration: the elastic analysis is carried out with the initial FE mesh; from its results the net of isostatic lines is drawn and a first tentative ii-mesh is built. This first ii-mesh can be improved, if necessary, by analysing the problem again and generating a new, improved ii-mesh after the FE analysis. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
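
The steepest-descent node relocation can be sketched on a toy problem; here the objective is a stand-in (sum of squared edge lengths, whose minimiser smooths the mesh) rather than the paper's total potential energy or average quadratic error, and the gradient is taken by finite differences:

```python
import numpy as np

def mesh_energy(nodes, edges):
    """Toy objective standing in for the paper's objective functions."""
    d = nodes[edges[:, 0]] - nodes[edges[:, 1]]
    return (d ** 2).sum()

def improve_mesh(nodes, edges, free, steps=200, lr=0.05, h=1e-6):
    """Steepest descent on node positions: only the free (interior) nodes
    move, so the node and dof counts stay fixed, as in the paper."""
    nodes = nodes.copy()
    for _ in range(steps):
        grad = np.zeros_like(nodes)
        for i in free:
            for j in range(2):
                # Central finite-difference gradient component.
                nodes[i, j] += h
                e_plus = mesh_energy(nodes, edges)
                nodes[i, j] -= 2 * h
                e_minus = mesh_energy(nodes, edges)
                nodes[i, j] += h
                grad[i, j] = (e_plus - e_minus) / (2 * h)
        nodes[free] -= lr * grad[free]
    return nodes

# Unit square with a misplaced centre node connected to all four corners;
# the optimum relocates the centre to the centroid (0.5, 0.5).
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0],
                  [0.9, 0.9]])
edges = np.array([[4, 0], [4, 1], [4, 2], [4, 3]])
opt = improve_mesh(nodes, edges, free=[4])
```

The finite-difference gradient over every free node is also a fair caricature of why the paper notes the method's large computational demand.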


Schrödinger's equation of a three-body system is a linear partial differential equation (PDE) defined on the 9-dimensional configuration space, ℝ9, naturally equipped with Jacobi's kinematic metric and with translational and rotational symmetries. The natural invariance of Schrödinger's equation with respect to the translational symmetry enables us to reduce the configuration space to a 6-dimensional one, while the rotational symmetry provides the quantum mechanical version of angular momentum conservation. However, the problem of maximizing the use of rotational invariance so as to reduce Schrödinger's equation to corresponding PDEs defined solely on triangular parameters, i.e., at the level of ℝ6/SO(3), has never been adequately treated. This article describes results on the orbital geometry and the harmonic analysis of (SO(3), ℝ6) which enable us to obtain such a reduction of the Schrödinger equation of three-body systems to PDEs defined solely on triangular parameters.
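
The translational reduction from ℝ9 to ℝ6 mentioned above is conventionally achieved with Jacobi vectors; a standard choice (the paper's exact mass-weighted normalisation may differ) is

```latex
\mathbf{x} = \mathbf{r}_2 - \mathbf{r}_1,
\qquad
\mathbf{y} = \mathbf{r}_3
  - \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{m_1 + m_2},
```

after which SO(3) acts diagonally on $(\mathbf{x}, \mathbf{y}) \in \mathbb{R}^3 \times \mathbb{R}^3 \cong \mathbb{R}^6$, and the rotation-invariant "triangular parameters" are functions of the three invariants $\lvert\mathbf{x}\rvert^2$, $\lvert\mathbf{y}\rvert^2$, and $\mathbf{x}\cdot\mathbf{y}$, which determine the triangle formed by the three bodies up to rotation.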


Which magnetic properties change when Fe/Co atoms are grouped into quasi-2D structures, compared with FexCo1-x nanowires (quasi-1D)? And how do these properties respond to varying the Fe/Co ratio in the clusters? To answer these questions, FexCo1-x trimers deposited on Pt(111) are investigated using the first-principles Real Space-Linear Muffin-Tin Orbital-Atomic Sphere Approximation (RS-LMTO-ASA) method within Density Functional Theory (DFT). Different triangular trimer configurations are considered, varying the positions and the concentration of the Fe/Co atoms. This work demonstrates a strictly decreasing, non-linear trend of the average orbital moments as a function of Fe concentration, distinct from what is found both for FexCo1-x nanowires (linear dependence) and for the corresponding monolayer (non-linear dependence). The results also show that the orbital moments vary with the local environment and with the magnetization direction, especially for the Co atoms, in agreement with previous publications. The change in dimensionality from quasi-1D (nanowires) to quasi-2D (compact trimers) does not affect the behaviour of the spin moments, which remain a linear function of the Fe/Co ratio. Both the shape of the clusters and their Fe concentration play an important role in the magnetic anisotropy energy values. In addition, the Pt substrate was observed to participate actively in defining the magnetic properties of the clusters. Although all linear and compact FexCo1-x cluster configurations are stable and exhibit strongly ferromagnetic nearest-neighbour interactions, not all of them have a collinear ground state, exhibiting a non-negligible Dzyaloshinskii-Moriya interaction induced by spin-orbit coupling. The specific cases are the pure Co triangular trimer and the pure Fe linear trimer (nanowire), for which Ruderman-Kittel-Kasuya-Yosida-type coupling between the constituent Fe atoms was verified. These results contribute to understanding which mechanisms define the magnetism of FexCo1-x/Pt(111) trimers, and address questions currently discussed in the literature on these systems.


Improving structural performance, whether for strength, cost, or comfort, is a constant goal in engineering. Gains have been achieved through the growing use of composite materials, whose distinctive physical properties can meet design requirements. Together with the use of composites, the study of plasticity offers an interesting alternative for increasing structural performance by providing additional load-bearing capacity. However, some problems arise in the elastoplastic analysis of composites, beyond the difficulties inherent in embedding fibres in the matrix in the case of reinforced composites. The way a fibre-reinforced composite and its phases are represented and simulated is extremely important to ensure that the results are compatible with reality. As more refined models are developed, problems emerge concerning computational cost and the need to make the degrees of freedom compatible between the finite element mesh nodes of the matrix and those of the reinforcement, often requiring the meshes to coincide. This work uses formulations that represent fibre-reinforced composites without requiring mesh coincidence. Moreover, it allows both the medium and the reinforcement to be simulated in the elastoplastic regime, in order to better study their real behaviour. The constitutive model adopted for plasticity is 2D associative von Mises plasticity with positive linear hardening, solved by an iterative process. A positional finite element formulation with a total Lagrangian description is adopted, taking the positions of the body in space as nodal parameters. To verify the correct implementation of the formulations considered, examples validating and demonstrating the capabilities of the developed computational code were analysed.
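
The associative von Mises model with positive linear hardening mentioned above is classically integrated with an elastic-predictor/plastic-corrector (return mapping) step; a 1D sketch with made-up material constants, standing in for the 2D model actually used in the work:

```python
def return_mapping_1d(strain_inc, state, E=200e3, H=10e3, sigma_y0=250.0):
    """Elastic predictor / plastic corrector for 1D von Mises-type
    plasticity with linear isotropic hardening (units: MPa).
    state = (stress, plastic_strain, accumulated_plastic_strain)."""
    sigma, eps_p, alpha = state
    sigma_trial = sigma + E * strain_inc            # elastic predictor
    f = abs(sigma_trial) - (sigma_y0 + H * alpha)   # yield function
    if f <= 0.0:
        return sigma_trial, eps_p, alpha            # step stays elastic
    # Plastic corrector: for linear hardening the consistency condition
    # gives the plastic multiplier in closed form.
    dgamma = f / (E + H)
    sign = 1.0 if sigma_trial > 0 else -1.0
    sigma_new = sigma_trial - E * dgamma * sign
    return sigma_new, eps_p + dgamma * sign, alpha + dgamma

# One elastic and one plastic strain increment from a virgin state.
elastic_state = return_mapping_1d(0.001, (0.0, 0.0, 0.0))
plastic_state = return_mapping_1d(0.002, (0.0, 0.0, 0.0))
```

In the 2D setting the scalar stress is replaced by the deviatoric stress tensor and the yield function by the von Mises equivalent stress, but the predictor/corrector structure of the iteration is the same.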


The finite element method is the most widespread numerical method for structural analysis. Over the past decades, countless finite elements have been formulated for shell and plate analysis. Finite element formulations handle the displacement field well, but tests validating the resulting stress fields are often lacking. This work analyses the stress results of the T6-3i element, a six-node triangular finite element proposed within a geometrically exact formulation, comparing them with analytical plate theories, with tables used to compute bending moments in rectangular plates, and with ANSYS, a commercial structural analysis package, showing that the T6-3i can yield unsatisfactory results. In the second part of this work, the capabilities of the T6-3i are extended by proposing a dynamic formulation for nonlinear shell analysis. An updated Lagrangian model is used, and the weak form is obtained from the theorem of virtual work. Numerical simulations of the deformation of thin domes exhibiting several snap-throughs and snap-backs, including domes with curved creases, demonstrate the robustness, simplicity, and versatility of the element in its formulation and in the generation of the unstructured meshes required for the simulations.