4 results for XCModel, cad 3d 2d, computer graphic, 64-bit porting, migration, static analysis, formal methods, modeling and rendering

in Digital Commons - Michigan Tech


Relevance:

40.00%

Publisher:

Abstract:

Though 3D computer graphics has seen tremendous advancement in the past two decades, most available mechanisms for computer interaction in 3D are high cost and targeted at industry and virtual reality applications. Recent advances in Micro-Electro-Mechanical Systems (MEMS) devices have brought forth a variety of new low-cost, low-power, miniature sensors with high accuracy, which are well suited for hand-held devices. In this work a novel design for a 3D computer game controller using inertial sensors is proposed, and a prototype device based on this design is implemented. The design incorporates MEMS accelerometers and gyroscopes from Analog Devices to measure the three components of the acceleration and angular velocity. From these sensor readings, the position and orientation of the hand-held unit can be calculated using numerical methods. The implemented prototype uses a USB 2.0-compliant interface for power and communication with the host system. A Microchip dsPIC microcontroller is used in the design; it integrates the analog-to-digital converters, the flash program memory, and the core processor on a single integrated circuit. A PC running the Microsoft Windows operating system is used as the host machine. Prototype firmware for the microcontroller is developed and tested to establish communication between the device and the host and to perform data acquisition and initial filtering of the sensor data. A PC front-end application with a graphical interface is developed to communicate with the device and allow real-time visualization of the acquired data.
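As a rough illustration of the numerical step described above, recovering position and orientation from accelerometer and gyroscope readings, the following minimal Python sketch performs naive dead reckoning. It is not the firmware or host software from this work; the sampling model, gravity convention, and function names are assumptions, and a practical controller would add calibration, filtering, and drift correction.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def dead_reckon(accel, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """Integrate body-frame accelerometer and gyroscope samples.

    accel, gyro : (N, 3) arrays of specific force [m/s^2] and angular
                  rate [rad/s] measured in the sensor (body) frame.
    dt          : sample period in seconds (assumed constant).
    Returns per-sample positions and body-to-world rotation matrices.
    """
    R = np.eye(3)          # body-to-world orientation
    v = np.zeros(3)        # world-frame velocity
    p = np.zeros(3)        # world-frame position
    positions, rotations = [], []
    for k in range(len(accel)):
        # First-order orientation update from the body-frame angular rate.
        R = R @ (np.eye(3) + skew(gyro[k]) * dt)
        # Project back onto the nearest rotation matrix to limit drift
        # (an SVD here is simple, if not the cheapest option).
        U, _, Vt = np.linalg.svd(R)
        R = U @ Vt
        # Rotate the specific force into the world frame, remove gravity,
        # then integrate twice for velocity and position.
        a_world = R @ accel[k] + g
        v = v + a_world * dt
        p = p + v * dt
        positions.append(p.copy())
        rotations.append(R.copy())
    return np.array(positions), np.array(rotations)
```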

Relevance:

40.00%

Publisher:

Abstract:

A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses the depth information, i.e., the distance from the camera focal point to the imaged objects. It is impossible to recover this information from a single image. However, by using two or more images taken from different viewing angles this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly provide position and orientation respectively, a camera system must estimate them by running SfM as described above. This makes the pose obtained from a camera highly sensitive to the captured images and to effects such as low lighting, poor focus, or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, i.e., sensor fusion, in achieving more accurate and usable position and reconstruction information. The project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D reconstruction of a scene. The first set of experiments is conducted in a motion capture room; these results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. A number of scenarios in which SfM fails are then targeted. The pose estimates obtained from SfM are replaced by those obtained from other sensors and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera together with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues faced when handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
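As an illustration of how depth is recovered once two camera poses are known, the sketch below performs linear (DLT) triangulation of a single point from two views. It is only a minimal example under assumed camera intrinsics and poses, not the SfM or sensor-fusion pipeline used in the report; the matrix values and function names are hypothetical.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (K [R | t]) of the two views.
    x1, x2 : (u, v) pixel coordinates of the same point in each image.
    Returns the 3D point in world coordinates.
    """
    # Each image measurement contributes two linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D world point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical setup: two cameras one unit apart along the x-axis, both
# looking down the z-axis, focal length 500 px.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
X_est = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)   # ~ [0.2, -0.1, 4.0]
```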

Relevance:

30.00%

Publisher:

Abstract:

The delivery of oxygen and nutrients and the removal of waste are essential for cellular survival. Culture systems for 3D bone tissue engineering have addressed this issue by utilizing perfusion flow bioreactors that stimulate osteogenic activity through the delivery of oxygen and nutrients by low-shear fluid flow. It is also well established that bone responds to mechanical stimulation, but may desensitize under continuous loading. While perfusion flow and mechanical stimulation are used to increase cellular survival in vitro, 3D tissue-engineered constructs face additional limitations upon in vivo implantation. Because vascular infiltration by the host requires significant time, implants are subject to an increased risk of necrosis. One solution is to introduce tissue-engineered bone that has been pre-vascularized through the co-culture of osteoblasts and endothelial cells on 3D constructs. It is unclear from previous studies: 1) how 3D bone tissue constructs will respond to partitioned mechanical stimulation, 2) how gene expression compares in 2D and in 3D, 3) how co-cultures will affect osteoblast activity, and 4) how perfusion flow will affect co-cultures of osteoblasts and endothelial cells. We have used an integrated approach to address these questions, utilizing mechanical stimulation, perfusion flow, and a co-culture technique to increase the success of 3D bone tissue engineering. We measured expression of several osteogenic and angiogenic genes in both 2D and 3D (static culture and mechanical stimulation), as well as in 3D cultures subjected to perfusion flow, mechanical stimulation, and partitioned mechanical stimulation. Finally, we co-cultured osteoblasts and endothelial cells on 3D scaffolds and subjected them to long-term incubation in either static culture or under perfusion flow to determine changes in gene expression as well as histological measures of osteogenic and angiogenic activity. We discovered that 2D and 3D osteoblast cultures react differently to shear stress, and that partitioning mechanical stimulation does not affect gene expression in our model. Furthermore, our results suggest that perfusion flow may rescue 3D tissue-engineered constructs from hypoxia-like conditions by reducing hypoxia-specific gene expression and increasing histological indices of both osteogenic and angiogenic activity. Future research to elucidate the mechanisms behind these results may contribute to a more mature bone-like structure that integrates more quickly into host tissue, increasing the potential of bone tissue engineering.

Relevance:

30.00%

Publisher:

Abstract:

Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamical systems that dominate many physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while allowing users to observe and understand the flow field clearly. My research focuses mainly on the analysis and visualization of flow fields using various techniques, e.g., information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines that capture flow patterns and how to pick good viewpoints from which to observe the flow field become critical questions. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates as the view changes gradually. When projecting 3D streamlines onto 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we designed FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. It enables observation and exploration of the relationships among field line clusters, spatiotemporal regions, and their interconnections in the transformed space. Most viewpoint selection methods consider only external viewpoints outside the flow field, which do not convey a clear view when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields: selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints, providing users with close-up views for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to handle unsteady flow fields. Beyond flow field visualization, some other visualization topics also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we developed a set of visualization tools that give users an intuitive way to learn and understand these algorithms.
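As a rough sketch of the internal-viewpoint path generation mentioned above, the following Python snippet fits a B-spline curve through a sequence of 3D viewpoints using SciPy. The viewpoint coordinates, sampling density, and function names are illustrative assumptions, not the implementation from [54].

```python
import numpy as np
from scipy.interpolate import splprep, splev

def camera_path(viewpoints, n_samples=200, smoothing=0.0):
    """Fit a smooth B-spline path through a sequence of 3D viewpoints.

    viewpoints : (N, 3) array of internal viewpoint positions, in the
                 order the camera should visit them.
    Returns an (n_samples, 3) array of positions along the spline.
    """
    pts = np.asarray(viewpoints, dtype=float)
    # splprep parameterizes the curve and fits one spline per coordinate;
    # smoothing=0 makes the path pass exactly through each viewpoint.
    tck, _ = splprep(pts.T, s=smoothing, k=min(3, len(pts) - 1))
    u = np.linspace(0.0, 1.0, n_samples)
    x, y, z = splev(u, tck)
    return np.column_stack([x, y, z])

# Hypothetical viewpoints placed near features inside a flow field.
views = [(0.1, 0.2, 0.5), (0.4, 0.5, 0.4), (0.6, 0.3, 0.7), (0.9, 0.6, 0.5)]
path = camera_path(views)
print(path.shape)   # (200, 3)
```

The sampled positions could then be paired with look-at targets on the nearby flow features to produce the close-up fly-through the abstract describes.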