871 results for 3D virtual models


Relevance:

30.00%

Abstract:

In this project, we propose the implementation of a 3D object recognition system optimized to operate under demanding time constraints. The system must be robust, so that objects can be recognized properly under poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power mobile GPU computing platform (NVIDIA Jetson TK1), so that it can be integrated into mobile robotics systems, ambient intelligence, or ambient assisted living applications. The acquisition system is based on the color and depth (RGB-D) data streams provided by low-cost 3D sensors such as the Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated is quite broad, ranging from the acquisition, outlier removal, and filtering of the input data, through the segmentation and characterization of regions of interest in the scene, to the object recognition and pose estimation itself. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models, reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, varying distances from the objects to the sensor, and variations in the pose of the target objects. The creation of this dataset entails the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
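
For orientation, here is a minimal sketch of the front half of such a pipeline (filtering, support-plane removal, and clustering into object candidates), written with the Open3D library rather than the project's own code; the file name and all thresholds are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

# Load an RGB-D-derived point cloud, downsample, and filter outliers.
pcd = o3d.io.read_point_cloud("scene.pcd")
pcd = pcd.voxel_down_sample(voxel_size=0.005)
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Segment the dominant support plane (table/floor) with RANSAC and keep
# everything that is not part of it.
_, plane_inliers = pcd.segment_plane(distance_threshold=0.01,
                                     ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(plane_inliers, invert=True)

# Each Euclidean cluster is a candidate object for the recognition and
# pose-estimation stage (e.g., descriptor matching against the dataset).
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
print("candidate objects:", labels.max() + 1)
```

Recognition and pose estimation would then proceed per cluster, for example by matching local descriptors against the reconstructed 3D models in the dataset.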

Relevance:

30.00%

Abstract:

Sensing techniques are important for resolving the uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting the robot controller when other sensor systems, such as tactile and force sensors, cannot obtain data relevant to the manipulation. In particular, a new visual approach based on RGB-D data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of an object's surfaces, allowing us to detect deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results show that our visual pipeline does not rely on deformation models of objects and materials, and that it works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method provides good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
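
The supervision loop described above amounts to comparing the observed object surface against a reference frame and signalling the controller when it changes. A minimal numpy sketch of that idea (the function name, mask, and thresholds are assumptions, not the authors' pipeline):

```python
import numpy as np

def deformation_event(depth_ref: np.ndarray,
                      depth_cur: np.ndarray,
                      object_mask: np.ndarray,
                      threshold_m: float = 0.003,
                      min_ratio: float = 0.05) -> bool:
    """Compare the current depth image of the grasped object's surface
    against a reference captured at first contact; return True (an event
    pulse for the controller) when a sufficient fraction of the surface
    has moved beyond the threshold. All values are illustrative."""
    diff = np.abs(depth_cur[object_mask] - depth_ref[object_mask])
    return float(np.mean(diff > threshold_m)) > min_ratio
```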

Relevance:

30.00%

Abstract:

Since the beginning of 3D computer vision, it has been necessary to reduce the data to a treatable size while preserving the important aspects of the scene. With the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this need has become even more pressing. Many applications that use these sensors require a preprocessing step to downsample the data, either to reduce the processing time or to improve the data (e.g., by reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed on the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some sampling kernels can drastically improve the results. The bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid registration application, a color-based sampled point cloud makes it possible to properly register two datasets in cases where intensity data are relevant to the model, outperforming the results obtained with a purely homogeneous sampling.
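
As a concrete illustration of the difference between a homogeneous and a feature-aware sampling, here is a sketch with Open3D and numpy: a voxel baseline next to one simple normal-based variant that keeps more points where normals vary. This is not the paper's exact set of kernels; the file name, neighbourhood size, and sampling rate are assumptions:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("model.ply")   # illustrative file name
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Homogeneous baseline: one representative point per occupied voxel.
uniform = pcd.voxel_down_sample(voxel_size=0.005)

# Feature-aware variant: sample with probability proportional to local
# normal variation, so edges and creases keep a higher point density.
kdtree = o3d.geometry.KDTreeFlann(pcd)
pts, normals = np.asarray(pcd.points), np.asarray(pcd.normals)
scores = np.empty(len(pts))
for i in range(len(pts)):
    _, idx, _ = kdtree.search_knn_vector_3d(pts[i], 15)
    mean_n = normals[np.asarray(idx)].mean(axis=0)
    mean_n /= np.linalg.norm(mean_n)
    scores[i] = 1.0 - abs(normals[i] @ mean_n)  # high where normals disagree
prob = (scores + 1e-12) / (scores + 1e-12).sum()
keep = np.random.choice(len(pts), size=len(pts) // 10, replace=False, p=prob)
normal_based = pcd.select_by_index(keep.tolist())
```

A color-based variant would score points by local color contrast instead of normal variation.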

Relevance:

30.00%

Abstract:

This project aims to investigate the different learning processes and interactions that take place through the development and application of innovative strategies in ICT-mediated environments in Biology, Physics, and Chemistry. The proposal will be carried out through four articulated components, integrated through their theoretical framework and their use in classroom practice. The first concerns the development, application, and evaluation of materials involving different modelling-centred processes. We propose to identify, adapt, and apply a series of technological resources used in modelling, investigating various aspects and generating dimensions and categories of analysis that allow these resources to be characterized. Among them, simulations will be used for teaching Physics; specifically, we will try to identify the difficulties students face when solving problems involving Newton's Laws, and develop a didactic proposal that includes a Java applet simulation for learning this content, with its corresponding evaluation. Another technological resource to be studied involves computer animation: the stop-motion strategy will be used for learning different aspects of cell division in Biology with secondary-school students, investigating the learning outcomes and the productions of students working in non-traditional ways. We also propose the use of two virtual laboratories with pre-service Biology teachers, allowing them to grasp concepts that usually require hands-on experimentation: one for identifying DNA through gel electrophoresis and the other on water pollution. Laboratory guides based on problem solving will be produced, and the innovation will be investigated through surveys and interviews, examining the impact of this resource, students' attitudes toward the strategy, and their learning. We will also investigate the application of Kokori, a freely distributed 3D educational video game whose objective is to reveal students' understanding of the metabolic processes of cells. It will be used with pre-service teachers, analysing the interaction with students' knowledge through situations in a game-based scenario. All the research in this component will focus on the characterization of the materials, their evaluation, and learning through modelling processes. The second component investigates the characteristics of the argumentation involved in the reading and writing processes promoted by working with ICT. Written productions by teachers and students on social networks, together with other materials developed and applied in the first component, will be analysed. The third component concerns the study of the interaction and communication fostered among the participants in the virtual activities and in the social networks involved, such as Facebook and Twitter, considering the dimensions, categories, and indicators that account for the communication processes in these environments.
The last component is the analysis of the processes and negotiations concerning statements, such as the premises that represent knowledge, brought into play in proposals developed by future Biology teachers using resources such as WebQuests. The methodological approach integrates quantitative and qualitative techniques and procedures. The theoretical contribution will make it possible to characterize different aspects of science teaching with ICT and, as a novel contribution, to consolidate a communication network among the teachers, students, and researchers involved in the project.

Relevance:

30.00%

Abstract:

The aim of analogue model experiments in geology is to simulate structures in nature under specific imposed boundary conditions, using materials whose rheological properties are similar to those of natural rocks. In the late 1980s, X-ray computed tomography (CT) was first applied to the analysis of such models. In early studies only a limited number of cross-sectional slices could be recorded, because of the time involved in CT data acquisition, the long cooling periods required by the X-ray source, and limited computational capacity. Technological improvements now allow an almost unlimited number of closely spaced serial cross-sections to be acquired and calculated. Computer visualization software allows a full 3D analysis of every recorded stage. Such analyses are especially valuable when trying to understand complex geological structures, commonly with lateral changes in 3D geometry. Periodic acquisition of volumetric data sets in the course of the experiment makes it possible to carry out a 4D analysis of the model, i.e. 3D analysis through time. Examples are shown of 4D analyses of analogue models that tested the influence of lateral rheological changes on the structures obtained in contractional and extensional settings.
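
The "4D analysis" mentioned here is simply a series of 3D volumes indexed by time. A minimal sketch of how such a dataset might be organized for analysis (file names, counts, and indices are assumptions):

```python
import numpy as np

# Stack serial CT volumes recorded at successive deformation stages into a
# single 4D array (time, z, y, x) for "3D analysis through time".
stages = [np.load(f"ct_stage_{t:02d}.npy") for t in range(10)]
volume_4d = np.stack(stages, axis=0)

# Example query: how the material along one vertical column evolves over
# the course of the experiment.
column_history = volume_4d[:, :, 120, 150]
print(volume_4d.shape, column_history.shape)
```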

Relevance:

30.00%

Abstract:

The power required to operate large gyratory mills often exceeds 10 MW. Hence, optimisation of the power consumption will have a significant impact on the overall economic performance and environmental impact of the mineral processing plant. Most published models of tumbling mills (e.g. [Morrell, S., 1996. Power draw of wet tumbling mills and its relationship to charge dynamics, Part 2: An empirical approach to modelling of mill power draw. Trans. Inst. Mining Metall. (Section C: Mineral Processing Ext. Metall.) 105, C54-C62; Austin, L.G., 1990. A mill power equation for SAG mills. Miner. Metall. Process. 57-62]) do not incorporate the effect of lifter design or its interaction with mill speed and filling. Recent experience suggests that there is an opportunity to improve grinding efficiency by choosing the appropriate combination of these variables. However, it is difficult to determine the interactions of these variables experimentally in a full-scale mill, and although some work using DEM simulations has recently been published, it was basically limited to 2D. The discrete element code Particle Flow Code 3D (PFC3D) has been used in this work to model the effects of lifter height (5-25 cm) and mill speed (50-90% of critical) on the power draw and on the frequency distribution of the specific energy (J/kg) of normal impacts in a 5 m diameter autogenous (AG) mill. It was found that the distribution of the impact energy is affected by the number of lifters, lifter height, mill speed, and mill filling. Interactions of lifter design, mill speed, and mill filling are demonstrated through three-dimensional distinct element method (3D DEM) modelling. The intensity of the induced stresses (shear and normal) on the lifters, and hence the lifter wear, is also simulated. (C) 2004 Elsevier Ltd. All rights reserved.
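
For context, the "50-90% of critical" range refers to the mill's critical rotational speed, at which centrifugal force balances gravity for a particle at the shell. A quick worked calculation for the 5 m mill, using the standard approximation N_c ≈ 42.3/√D rpm (D in metres, media size neglected; this formula is common mill practice, not taken from the paper):

```python
import math

def critical_speed_rpm(diameter_m: float) -> float:
    """Approximate mill critical speed, N_c = 42.3 / sqrt(D) rpm."""
    return 42.3 / math.sqrt(diameter_m)

nc = critical_speed_rpm(5.0)     # ~18.9 rpm for a 5 m AG mill
lo, hi = 0.5 * nc, 0.9 * nc      # the 50-90% range simulated in the paper
print(f"critical: {nc:.1f} rpm, simulated range: {lo:.1f}-{hi:.1f} rpm")
```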

Relevance:

30.00%

Abstract:

The Virtual Learning Environment (VLE) is one of the fastest growing areas in educational technology research and development. To achieve learning effectiveness, ideal VLEs should be able to identify learning needs and customize solutions, with or without an instructor to supplement instruction. These are called Personalized VLEs (PVLEs). For PVLEs to succeed, comprehensive conceptual models of them are essential; such conceptual modeling is important because it facilitates early detection and correction of system development errors. Therefore, in order to capture PVLE knowledge explicitly, this paper focuses on the development of conceptual models for PVLEs, including models of knowledge primitives (learner, curriculum, and situational models), models of VLEs on general pedagogical bases, and, in particular, the definition of an ontology of PVLEs based on the constructivist pedagogical principle. Based on these comprehensive conceptual models, a prototype multiagent-based PVLE has been implemented. A field experiment was conducted to investigate learning achievement by comparing personalized and non-personalized systems. The results indicate that the PVLE developed under our comprehensive ontology successfully delivers significant learning achievements. These comprehensive models also provide a solid knowledge representation framework for PVLE development practice, guiding the analysis, design, and development of PVLEs. (c) 2005 Elsevier Ltd. All rights reserved.
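
To make the three knowledge primitives concrete, here is one possible rendering of them as data structures together with a toy personalization rule; the field names and the rule are illustrative assumptions, not the paper's ontology:

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    learner_id: str
    mastered: set[str] = field(default_factory=set)

@dataclass
class CurriculumModel:
    # concept -> list of prerequisite concepts
    prerequisites: dict[str, list[str]] = field(default_factory=dict)

@dataclass
class SituationalModel:
    device: str = "desktop"
    instructor_present: bool = False

def next_concepts(learner: LearnerModel,
                  curriculum: CurriculumModel) -> list[str]:
    """Toy personalization rule: suggest concepts whose prerequisites the
    learner has already mastered."""
    return [c for c, pre in curriculum.prerequisites.items()
            if c not in learner.mastered
            and all(p in learner.mastered for p in pre)]

curriculum = CurriculumModel({"fractions": [], "ratios": ["fractions"]})
print(next_concepts(LearnerModel("s1", {"fractions"}), curriculum))  # ['ratios']
```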

Relevance:

30.00%

Abstract:

Network building and exchange of information by people within networks is crucial to the innovation process. Contrary to older models, in social networks the flow of information is noncontinuous and nonlinear. There are critical barriers to information flow that operate in a problematic manner. New models and new analytic tools are needed for these systems. This paper introduces the concept of virtual circuits and draws on recent concepts of network modelling and design to introduce a probabilistic switch theory that can be described using matrices. It can be used to model multistep information flow between people within organisational networks, to provide formal definitions of efficient and balanced networks and to describe distortion of information as it passes along human communication channels. The concept of multi-dimensional information space arises naturally from the use of matrices. The theory and the use of serial diagonal matrices have applications to organisational design and to the modelling of other systems. It is hypothesised that opinion leaders or creative individuals are more likely to emerge at information-rich nodes in networks. A mathematical definition of such nodes is developed and it does not invariably correspond with centrality as defined by early work on networks.
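
A minimal sketch of the matrix view of multistep flow (the transition probabilities and the "richness" measure below are invented for illustration; they are not the paper's switch theory):

```python
import numpy as np

# Entry (i, j): probability that person i passes a message to person j in
# one step. Rows need not sum to 1 -- information can be blocked or lost.
P = np.array([
    [0.0, 0.6, 0.4, 0.0],
    [0.1, 0.0, 0.3, 0.5],
    [0.0, 0.2, 0.0, 0.7],
    [0.0, 0.0, 0.1, 0.0],
])

source = np.array([1.0, 0.0, 0.0, 0.0])        # message starts at person 0
after_three = source @ np.linalg.matrix_power(P, 3)

# One plausible reading of an "information-rich node": high expected
# inbound flow accumulated over the first k steps.
richness = sum(np.linalg.matrix_power(P, k) for k in range(1, 4)).sum(axis=0)
print(after_three, richness)
```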

Relevance:

30.00%

Abstract:

Langerhans cells (LCs) can be targeted with DNA-coated gold micro-projectiles ("Gene Gun") to induce potent cellular and humoral immune responses. It is likely that the relative volumetric distribution of LCs and keratinocytes within the epidermis impacts on the efficacy of Gene Gun immunization protocols. This study quantified the three-dimensional (3D) distribution of LCs and keratinocytes in the mouse skin model with a near-infrared multiphoton laser-scanning microscope (NIR-MPLSM). Stratum corneum (SC) and viable epidermal thickness measured with MPLSM was found in close agreement with conventional histology. LCs were located in the vertical plane at a mean depth of 14.9 µm, less than 3 µm above the dermo-epidermal boundary and with a normal histogram distribution. This likely corresponds to the fact that LCs reside in the suprabasal layer (stratum germinativum). The nuclear volume of keratinocytes was found to be approximately 1.4 times larger than that of resident LCs (88.6 µm³). Importantly, the ratio of LCs to keratinocytes in mouse ear skin (1:15) is more than three times higher than that reported for human breast skin (1:53). Accordingly, cross-presentation may be more significant in clinical Gene Gun applications than in pre-clinical mouse studies. These interspecies differences should be considered in pre-clinical trials using mouse models.

Relevance:

30.00%

Abstract:

Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains, how do the 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally-loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power law time-to-failure functions to the cumulative energy release are obtained.
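
The power-law time-to-failure fit mentioned at the end has a standard functional form in Critical Point studies, E(t) = A + B(t_f − t)^m with B < 0 and 0 < m < 1. A sketch of fitting it with scipy on synthetic data (the data and starting values are invented for illustration; this is not the paper's code):

```python
import numpy as np
from scipy.optimize import curve_fit

def time_to_failure(t, A, B, tf, m):
    """Accelerating cumulative release: E(t) = A + B * (tf - t)**m."""
    return A + B * np.power(tf - t, m)

# Synthetic stand-in for a simulated cycle's cumulative energy release.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 9.5, 200)
energy = 10.0 - 2.0 * (10.0 - t) ** 0.3 + rng.normal(0.0, 0.02, t.size)

params, _ = curve_fit(time_to_failure, t, energy,
                      p0=(10.0, -2.0, 10.5, 0.3),
                      bounds=([-50, -50, 9.6, 0.01], [50, 50, 20.0, 1.0]))
print("fitted failure time tf =", params[2])
```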

Relevance:

30.00%

Abstract:

Several studies on the acceptance and use of information technologies have been carried out using different conceptual models that seek to demonstrate the existence of factors influencing users' intention and behaviour toward a given information system. Some aspects that strengthen the intention to use can be decisive in whether or not an individual adopts a given technology. This study analysed the antecedent factors that may influence the intention to use the Virtual Library of a Higher Education Institution (HEI). Among the conceptual models that study the adoption of information technologies, this research adopted the Technology Acceptance Model (TAM) proposed by Davis (1986), extended with the external variables Teacher Encouragement and Habit in order to increase its power to explain the intention to use of the users in question. Through a quantitative research approach, data were collected with a survey instrument, yielding 406 completed questionnaires. The results confirm the influence of several factors positioned across different dimensions and support conclusions relevant to the adoption of the virtual library. Teacher Encouragement positively influences the Perceived Ease of Use, Perceived Usefulness, and Habit of undergraduate students with respect to the intention to use the Virtual Library. The teacher's influence thus proves to be a determining factor in the intention to use the virtual library, which underscores the importance of teachers' guidance and recommendations in the use of this technological tool. The study concluded that Perceived Usefulness is the factor that most influences the intention to use this technology, reflecting the perception that using the virtual library can improve a student's academic performance, among other benefits. Habit can also influence the adoption of the virtual library, in parallel with perceptions of the benefits that can be obtained from using this technology.
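
The hypothesized paths (Teacher Encouragement feeding Ease of Use, Usefulness, and Habit, which in turn feed Intention) could be examined with simple regressions. A sketch with statsmodels (the column names and file are illustrative assumptions; the abstract does not specify the study's estimation method):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")   # 406 Likert-scale responses

ease = smf.ols("ease_of_use ~ teacher_encouragement", data=df).fit()
useful = smf.ols("usefulness ~ ease_of_use + teacher_encouragement",
                 data=df).fit()
intent = smf.ols("intention ~ usefulness + ease_of_use + habit",
                 data=df).fit()
print(intent.params)   # usefulness expected to carry the largest weight
```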

Relevance:

30.00%

Abstract:

The main objective of food safety policy is to protect consumer health through specific safety rules and protocols. In order to meet food safety and quality standardization requirements, in 2002 the European Parliament and the Council of the EU (Regulation (EC) 178/2002 (EC, 2002)) sought to harmonize concepts, principles, and procedures so as to provide a common basis for the regulation of food and feed originating from Member States at the Community level. The formalization of standardization rules and protocols, however, should proceed through a more detailed and accurate understanding and harmonization of the global (macroscopic), pseudo-local (mesoscopic), and possibly local (microscopic) properties of food products. The main objective of this doctoral thesis is to show how computational techniques can provide valuable support for such analysis, through (i) the application of established protocols and (ii) the improvement of widely applied techniques. A direct demonstration of the potential already offered by computational approaches is given in the first study, in which a docking-based virtual screening was applied to assess the preliminary xeno-androgenicity of selected food contaminants. The second and third studies concern the development and validation of new physico-chemical descriptors in a 3D-QSAR context. The resulting methodology, named HyPhar (Hydrophobic Pharmacophore), was used to explore selectivity between structurally related molecular targets, and thereby demonstrated the applicability and adaptability required in a food context. Overall, the results give us confidence in the potential impact that in silico techniques can have in identifying and clarifying molecular events involved in the toxicological and nutritional aspects of food.
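
3D-QSAR models such as the one described are conventionally built by regressing activity on large blocks of field descriptors with partial least squares (PLS). A generic sketch of that step with scikit-learn (toy data; this is not the HyPhar implementation):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))   # 40 compounds x 500 field-descriptor values
y = rng.normal(size=40)          # measured activities (e.g., pIC50)

pls = PLSRegression(n_components=3)
q2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
pls.fit(X, y)
print("cross-validated r2 on toy data:", round(q2, 3))
```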

Relevance:

30.00%

Abstract:

The primary goal of this research is to design and develop an education technology to support learning in global operations management. The research implements a series of studies to determine the right balance among user requirements, learning methods, and applied technologies, from a student-centred learning perspective. This research is multidisciplinary by nature, involving topics from disciplines such as global operations management, curriculum and contemporary learning theory, and computer-aided learning. Innovative learning models that emphasise technological implementation are employed and discussed throughout this research.

Relevance:

30.00%

Abstract:

Geometric information relating to most engineering products is available in the form of orthographic drawings or 2D data files. For many recent computer-based applications, such as Computer Integrated Manufacturing (CIM), these data are required in the form of a sophisticated model based on Constructive Solid Geometry (CSG) concepts. A recent novel technique in this area transfers 2D engineering drawings directly into a 3D solid model called the 'first approximation'. In many cases, however, this does not represent the real object. In this thesis, a new method is proposed and developed to enhance this model. The method uses the notion of expanding an object in terms of other solid objects, which are either primitives or first approximation models. To achieve this goal, in addition to the existing subroutine that calculates the first approximation model of the input data, two further wireframe models are derived for the extraction of sub-objects. One is the wireframe representation of the input, and the other is the wireframe of the first approximation model. A new, fast method is developed for the latter special-case wireframe, named the 'first approximation wireframe model'; this method avoids the use of a solid modeller. Detailed descriptions of the algorithms and implementation procedures are given, and the use of dashed-line information to improve the model is also considered. Different practical examples are given to illustrate the functioning of the program. Finally, a recursive method is employed to automatically modify the output model towards the real object. Suggestions for further work are made, aimed at increasing the domain of objects covered and providing a commercially usable package. It is concluded that the current method promises the production of accurate models for a large class of objects.
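
The "expansion of an object in terms of other solid objects" is the core CSG idea: a solid is a tree of boolean operations over primitives. A toy sketch of that representation (the classes and the example solid are illustrative assumptions, not the thesis's data structures):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Box:
    lo: tuple[float, float, float]
    hi: tuple[float, float, float]
    def contains(self, p) -> bool:
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

@dataclass
class Difference:
    left: "Node"
    right: "Node"
    def contains(self, p) -> bool:
        # A point is in the difference if inside left but not inside right.
        return self.left.contains(p) and not self.right.contains(p)

Node = Union[Box, Difference]

# An L-shaped bracket: a block with one corner block subtracted -- the kind
# of sub-object expansion applied to a first approximation model.
solid = Difference(Box((0, 0, 0), (4, 4, 2)), Box((2, 2, 0), (4, 4, 2)))
print(solid.contains((1.0, 1.0, 1.0)))   # True
print(solid.contains((3.0, 3.0, 1.0)))   # False (inside the cut-out)
```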