974 results for 3D scene understanding


Relevance:

40.00%

Publisher:

Abstract:

The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this purpose, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver the internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he claimed. In the second case a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, the latter indicating that he wilfully injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data.
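As a rough illustration of the data-merging step described above, the sketch below (Python with NumPy, not taken from the paper) brings several hypothetical point clouds, each in its own sensor frame, into a common frame with assumed rigid transforms and concatenates them into one model; the array names, sizes and transforms are all placeholders.

```python
# Minimal sketch of a data-merging step: point clouds from different
# sources (body surface scan, CT-derived surfaces, laser-scanned scene)
# are mapped into one coordinate frame with known rigid transforms and
# concatenated. All inputs and transforms below are illustrative.
import numpy as np

def apply_rigid_transform(points, rotation, translation):
    """Map an (N, 3) point cloud into the target frame: p' = R @ p + t."""
    return points @ rotation.T + translation

# Hypothetical inputs, each expressed in its own sensor frame.
surface_scan = np.random.rand(1000, 3)   # optical surface-scan points
ct_surface   = np.random.rand(800, 3)    # surfaces extracted from CT data
scene_scan   = np.random.rand(5000, 3)   # laser-scanned incident scene

# Assumed transforms (e.g. obtained from landmark- or ICP-based registration).
R_ct, t_ct = np.eye(3), np.array([0.0, 0.0, 0.0])
R_scene, t_scene = np.eye(3), np.array([1.5, 0.0, 0.0])

fused = np.vstack([
    surface_scan,
    apply_rigid_transform(ct_surface, R_ct, t_ct),
    apply_rigid_transform(scene_scan, R_scene, t_scene),
])
print(fused.shape)  # single merged point cloud for reconstruction/animation
```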

Relevance:

40.00%

Publisher:

Abstract:

In this paper, we present a depth-color scene modeling strategy for indoor 3D content generation. It combines depth and visual information provided by a low-cost active depth camera to improve the accuracy of the acquired depth maps, taking into account the different dynamic nature of the scene elements. Accurate depth and color models of the scene background are iteratively built and used to detect moving elements in the scene. The acquired depth data is continuously processed with an innovative joint-bilateral filter that efficiently combines depth and visual information thanks to the analysis of an edge-uncertainty map and the detected foreground regions. The main advantages of the proposed approach are: removal of spatial noise and temporal random fluctuations from the depth maps; refinement of depth data at object boundaries; and the iterative generation of a robust depth and color background model and an accurate moving-object silhouette.
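For readers unfamiliar with the filtering idea, here is a minimal textbook-style sketch of a joint (cross) bilateral filter in Python/NumPy: each depth value is replaced by a weighted average of its neighbours, with weights combining spatial distance and colour similarity in the guidance image. It deliberately omits the paper's edge-uncertainty map and foreground handling, and all parameter values are illustrative.

```python
# Generic joint (cross) bilateral filter that smooths a depth map using the
# color image as guidance. This is a textbook formulation, not the paper's
# exact filter; radius and sigma values are arbitrary.
import numpy as np

def joint_bilateral_filter(depth, color, radius=3, sigma_s=2.0, sigma_r=10.0):
    """depth: (H, W) float array; color: (H, W, 3) float guidance image."""
    H, W = depth.shape
    out = np.zeros_like(depth)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial weight: Gaussian on pixel distance.
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Range weight: Gaussian on color difference in the guidance image.
            diff = color[y0:y1, x0:x1] - color[y, x]
            w_r = np.exp(-np.sum(diff ** 2, axis=2) / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[y, x] = np.sum(w * depth[y0:y1, x0:x1]) / np.sum(w)
    return out

# Usage with synthetic data (a real pipeline would use Kinect-style frames).
depth = np.random.rand(64, 64).astype(np.float32)
color = (np.random.rand(64, 64, 3) * 255).astype(np.float32)
refined = joint_bilateral_filter(depth, color)
```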

Relevance:

40.00%

Publisher:

Abstract:

Nowadays, the new generation of computers provides the performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential requirement for moving through that environment. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars and contact sensors have typically been used in mobile robotic architectures. Color cameras, however, are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots sufficient visual understanding of the scenes they observe. Computer vision algorithms are computationally complex, but robots now have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide colored 3D point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to apply both computer vision and 3D processing, and therefore to be aware of more details of the surrounding environment.

The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, acquiring a 3D model of a scene is useful in many areas, such as video-game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is obtained the system can add furniture pieces using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance.

In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis therefore proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU).

Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data. This is because our target environments are man-made, and therefore many points belong to planar surfaces. Our proposed method achieves good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we also demonstrate the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
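A minimal sketch of the plane-based compression idea, assuming a simple RANSAC plane detector (a generic re-statement in Python/NumPy, not the thesis implementation; thresholds, iteration counts and the subsampling rate are arbitrary):

```python
# Detect a dominant plane with RANSAC and replace its inliers by the plane
# parameters plus a sparse sample, keeping the remaining points as-is.
import numpy as np

def fit_plane(p1, p2, p3):
    """Plane (n, d) with n . x + d = 0 through three points."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None
    n = n / norm
    return n, -np.dot(n, p1)

def ransac_plane(points, iters=200, threshold=0.01):
    best_inliers, best_plane = None, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        plane = fit_plane(*points[idx])
        if plane is None:
            continue
        n, d = plane
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, plane
    return best_plane, best_inliers

# Man-made scenes contain many planar points, so storing (n, d) plus a
# coarse subsample of the inliers compresses the cloud substantially.
cloud = np.random.rand(2000, 3)
cloud[:1200, 2] = 0.5 + 0.002 * np.random.randn(1200)   # synthetic wall/floor
(plane_n, plane_d), inliers = ransac_plane(cloud)
compressed = {"plane": (plane_n, plane_d),
              "plane_sample": cloud[inliers][::20],      # keep ~5% of inliers
              "residual_points": cloud[~inliers]}
```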

Relevance:

30.00%

Publisher:

Abstract:

Introduction: For the past decade, three-dimensional (3D) culture has served as a foundation for regenerative medicine research. With increasing awareness of the importance of cell-cell and cell-extracellular matrix interactions, which are lacking in 2D culture systems, 3D culture systems have been employed for many other applications, notably cancer research. Through the development of various biomaterials and the use of tissue engineering technology, many in vivo physiological responses are now better understood. The cellular and molecular communication between cancer cells and their microenvironment, for instance, can be studied in vitro in a 3D culture system without relying on animal models alone. The predilection of prostate cancer (CaP) for bone remains obscure due to the complexity of the mechanisms involved and the lack of a proper model for such studies. In this study, we aim to investigate the interaction between CaP cells and osteoblasts, simulating natural bone metastasis. We further investigate the invasiveness of CaP cells and the response of the androgen-sensitive CaP cell line LNCaP to synthetic androgen.----- Method: Human osteoblast (hOB) scaffolds were prepared by seeding hOBs on medical-grade polycaprolactone-tricalcium phosphate (mPCL-TCP) scaffolds and inducing them to produce bone matrix. CaP cell lines, namely wild-type PC3 (PC3-N), prostate-specific-antigen-overexpressing PC3 (PC3k3s5) and LNCaP, were seeded on the hOB scaffolds as co-cultures. Cell morphology was examined by phalloidin-DAPI and SEM imaging. Gelatin zymography was performed on 48-hour conditioned media (CM) from the co-cultures to determine matrix metalloproteinase (MMP) activity. Gene expression in hOB/LNCaP co-cultures treated for 48 hours with 1 nM of the synthetic androgen R1881 was analysed by quantitative real-time PCR (qRT-PCR).----- Results: PCC/hOB co-culture revealed that the morphology of the prostate cancer cells (PCCs) on the tissue-engineered bone matrix varied from homogeneous to heterogeneous clusters. Enzymatically inactive pro-MMP2 was detected in CM from hOBs and PCCs cultured on scaffolds. Elevated MMP9 activity was found only in the hOB/PC3-N co-culture. The hOB/LNCaP co-culture showed an increase in the expression of key enzymes associated with steroid production, which also corresponded to an increase in prostate specific antigen (PSA) and MMP9.----- Conclusions: Upregulation of MMP9 indicates the involvement of ECM degradation in cancer invasion and bone metastasis. Expression of PSA, an enzyme involved in CaP progression that is not expressed in osteoblasts, demonstrates that crosstalk between PCCs and osteoblasts may play a part in the aggressiveness of CaP. The presence of steroidogenic enzymes, particularly RDH5, in osteoblasts, and their stimulated expression in co-culture, may indicate osteoblast production of potent androgens, fuelling cancer cell proliferation. Based on these results, this practical 3D culture system may provide greater insight into CaP-mediated bone metastasis and allows the role of the CaP/hOB interaction with regard to invasive properties and steroidogenesis to be further explored.

Relevance:

30.00%

Publisher:

Abstract:

One of the major challenges facing a present-day game development company is the removal of bugs from such complex virtual environments. This work presents an approach for measuring the correctness of synthetic scenes generated by the rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution of the objects present in a sequence of bug-free frames. This is done by converting the positions that the pixels take over time into the equivalent 3D points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); yet the algorithm is able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of the object. We applied our framework to the publicly available game RacingGame developed for Microsoft® XNA®. Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional-quality game engine.
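A minimal sketch of the database-and-distance idea, assuming a nearest-neighbour query over concatenated position and colour coordinates (Python with SciPy; the class name and the simple mean-distance score are illustrative, not the paper's exact metric):

```python
# Store labelled 3D points with colours from bug-free frames, then score a
# new frame's samples by their distance to that reference database.
import numpy as np
from scipy.spatial import cKDTree

class SceneColourDatabase:
    def __init__(self, points, colours):
        """points: (N, 3) object-space positions; colours: (N, 3) RGB in [0, 1]."""
        self.tree = cKDTree(np.hstack([points, colours]))

    def inconsistency(self, points, colours):
        """Mean distance of the new samples to their nearest reference sample."""
        dists, _ = self.tree.query(np.hstack([points, colours]))
        return float(dists.mean())

# Reference samples accumulated from bug-free frames (synthetic here).
ref_pts, ref_cols = np.random.rand(10000, 3), np.random.rand(10000, 3)
db = SceneColourDatabase(ref_pts, ref_cols)

# Samples from a newly rendered frame: a low score suggests visual consistency.
new_pts, new_cols = np.random.rand(500, 3), np.random.rand(500, 3)
print(db.inconsistency(new_pts, new_cols))
```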

Relevance:

30.00%

Publisher:

Abstract:

Traditionally, conceptual modelling of business processes involves the use of visual grammars for the representation of, amongst other things, activities, choices and events. These grammars, while very useful for experts, are difficult for naive stakeholders to understand. Annotations of such process models have been developed to assist in understanding aspects of these grammars via map-based approaches, and further work has looked at forms of 3D conceptual models. However, no one has sought to embed the conceptual models into a fully featured 3D world, using spatial annotations to explicate the underlying model clearly. In this paper, we present an approach to conceptual process model visualisation that enhances a 3D virtual world with annotations representing process constructs, facilitating insight into the developed model. We then present a prototype implementation of a 3D Virtual BPMN Editor that embeds BPMN process models into a 3D world. We show how this gives extra support for tasks performed by the conceptual modeller, providing better process model communication to stakeholders.

Relevance:

30.00%

Publisher:

Abstract:

Multi-storey buildings are highly vulnerable to terrorist bombing attacks in various parts of the world. Large numbers of casualties and extensive property damage result not only from blast overpressure, but also from the failure of structural components. Understanding the blast response and damage consequences of reinforced concrete (RC) building frames is therefore important when assessing multi-storey buildings designed to resist normal gravity loads. However, limited research has been conducted to identify the blast response and damage of RC frames in order to assess the vulnerability of entire buildings. This paper discusses the blast response and damage evaluation of a three-dimensional (3D) RC rigid frame under potential blast load scenarios. Explicit finite element modelling and analysis under time-history blast pressure loads were carried out with the LS-DYNA code. A complete 3D RC frame was developed with relevant reinforcement details and material models including strain rate effects. Idealised triangular blast pressures calculated from standard manuals are applied to the front face of the model in the present investigation. The analysis results show the blast response in terms of displacements and material yielding of the structural elements in the RC frame. The level of damage is evaluated and classified according to the selected load case scenarios. Residual load-carrying capacities are evaluated and the level of damage is presented by means of the defined damage indices. This information is necessary to determine the vulnerability of existing multi-storey buildings with RC frames and to identify the level of damage under typical external explosion environments. It also provides basic guidance for the design of new buildings to resist blast loads.
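For context, a commonly used idealised triangular pressure-time history decays linearly from the peak overpressure to zero over the positive-phase duration. The sketch below (Python/NumPy) evaluates such a pulse with placeholder values; the peak pressure and duration shown are not the load cases analysed in the study.

```python
# Idealised triangular blast pulse of the kind applied to the front face of
# a frame model: P(t) = p_peak * (1 - t / t_duration) for 0 <= t <= t_duration.
import numpy as np

def triangular_blast_pressure(t, p_peak, t_duration):
    """Overpressure time history for an idealised triangular pulse."""
    t = np.asarray(t, dtype=float)
    p = p_peak * (1.0 - t / t_duration)
    return np.where((t >= 0) & (t <= t_duration), p, 0.0)

# Example time history (assumed peak 500 kPa, 20 ms positive phase).
time = np.linspace(0.0, 0.03, 301)                                  # seconds
pressure = triangular_blast_pressure(time, p_peak=500e3, t_duration=0.02)  # Pa

# Impulse per unit area by the trapezoidal rule; analytically p_peak * t_d / 2.
impulse = float(np.sum(0.5 * (pressure[1:] + pressure[:-1]) * np.diff(time)))
print(f"impulse per unit area: {impulse:.1f} Pa*s")
```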

Relevance:

30.00%

Publisher:

Abstract:

Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of the groundwater systems and hydrological processes. However, groundwater data can be complex, confusing and often difficult for people to comprehend. A powerful way to communicate understanding of groundwater processes, complex subsurface geology and their relationships is through the use of visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate and interact with 3D models can encourage a higher level of understanding than static images alone. While there is an increasing number of software tools available for developing and visualising groundwater conceptual models, these packages are often very expensive and, due to their complexity, are not readily accessible to the majority of people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and enhance community understanding.

Relevance:

30.00%

Publisher:

Abstract:

The international focus on embracing daylighting for energy-efficient lighting purposes, and the corporate sector's indulgence in the perception of workplace and work-practice "transparency", have spurred an increase in highly glazed commercial buildings. This in turn has renewed issues of visual comfort and daylight-derived glare for occupants. In order to ascertain evidence of these events, or to predict their risk, appraisals of these complex visual environments require detailed information on the luminances present in an occupant's field of view. Conventional luminance meters are an expensive and time-consuming way of achieving these results. Creating a luminance map of an occupant's visual field with such a meter requires too many individual measurements to be a practical measurement technique. The application of digital cameras as luminance measurement devices has solved this problem. With high dynamic range imaging, a single digital image can be created to provide luminances on a pixel-by-pixel level within the broad field of view afforded by a fish-eye lens: virtually replicating an occupant's visual field and providing rapid yet detailed luminance information for the entire scene. With proper calibration, relatively inexpensive digital cameras can be successfully applied to the task of luminance measurement, placing them in the realm of tools that any lighting professional should own. This paper discusses how a digital camera can become a luminance measurement device and then presents an analysis of results obtained from post-occupancy measurements from building assessments conducted by the Mobile Architecture Built Environment Laboratory (MABEL) project. This discussion leads to the important realisation that placing such tools in the hands of lighting professionals internationally will provide new opportunities for the lighting community in terms of research on critical lighting issues such as daylight glare, visual quality and comfort.
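As a rough sketch of the pixel-to-luminance conversion, assuming the calibrated HDR image stores linear radiance with Rec. 709 primaries and adopting the ~179 lm/W luminous-efficacy convention used by the Radiance system (a real workflow also needs camera-specific photometric calibration and vignetting correction for the fish-eye lens):

```python
# Convert a linear HDR image into a per-pixel luminance map. The Rec. 709
# weights and the 179 lm/W factor are assumptions stated above, not values
# taken from the paper.
import numpy as np

def luminance_map(hdr_rgb):
    """hdr_rgb: (H, W, 3) linear radiance values -> (H, W) luminance in cd/m^2."""
    weights = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminous weights
    return 179.0 * (hdr_rgb @ weights)

# Synthetic example standing in for an HDR capture of an occupant's view.
hdr = np.random.rand(64, 64, 3) * 0.5
lum = luminance_map(hdr)
print(lum.max(), lum.mean())  # inspect peak/average luminance for glare checks
```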

Relevance:

30.00%

Publisher:

Abstract:

The Howard East rural area has experienced rapid growth of small-block subdivisions and horticulture over the last 40 years, which has been based on groundwater supply. Early bores in the area provide part of the water supply for Darwin City and are maintained and monitored by NT Power & Water Corporation. The Territory government (NRETAS) has established a monitoring network, and 48 bores are now monitored. However, in the area there are over 2700 private bores that are unregulated. Although NRETAS has both FDM and FEM simulations for the region, community support for potential regulation is sought. To improve stakeholder understanding of the resource, QUT was retained by the TRaCK consortium to develop a 3D visualisation of the groundwater system.

Relevance:

30.00%

Publisher:

Abstract:

Cell-cell and cell-matrix interactions play a major role in tumor morphogenesis and cancer metastasis. Therefore, it is crucial to create a model with a biomimetic microenvironment that allows such interactions to fully represent the pathophysiology of a disease for an in vitro study. This is achievable by using three-dimensional (3D) models instead of conventional two-dimensional (2D) cultures with the aid of tissue engineering technology. We are now able to better address the complex intercellular interactions underlying prostate cancer (CaP) bone metastasis through such models. In this study, we assessed the interaction of CaP cells and human osteoblasts (hOBs) within a tissue-engineered bone (TEB) construct. Consistent with other in vivo studies, our findings show that intercellular and CaP cell-bone matrix interactions lead to elevated levels of matrix metalloproteinases, steroidogenic enzymes and the CaP biomarker prostate specific antigen (PSA), all associated with CaP metastasis. Hence, it highlights the physiological relevance of this model. We believe that this model will provide new insights into the previously poorly understood molecular mechanisms of bone metastasis, foster further translational studies, and ultimately offer a potential tool for drug screening. © 2010 Landes Bioscience.

Relevance:

30.00%

Publisher:

Abstract:

This paper reports on three primary school students' explorations of 3D rotation in a virtual reality learning environment (VRLE) named VRMath. When asked to investigate whether you would face the same direction after turning right 45 degrees and then rolling up 45 degrees, or after rolling up 45 degrees and then turning right 45 degrees, the students found that the two orders of the turns ended with different facing directions in the VRLE. This was contrary to the students' prior predictions based on using pen, paper and body movements. The findings of this study show the difficulty young children have in perceiving and understanding the non-commutative nature of 3D rotation, and the power of the computational VRLE in giving students experiences with 3D manipulations and 3D mental movements that they rarely have in real life.
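The underlying mathematics can be illustrated with two rotation matrices: applying a 45-degree yaw and a 45-degree pitch in the two different orders maps the initial facing direction to different vectors, because matrix multiplication does not commute. The axis conventions in the sketch below are illustrative and not taken from VRMath.

```python
# Demonstrate that 3D rotations are non-commutative using rotation matrices.
import numpy as np

def yaw(deg):    # "turn right": rotation about the vertical (y) axis
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [         0, 1,         0],
                     [-np.sin(a), 0, np.cos(a)]])

def pitch(deg):  # "roll up": rotation about the sideways (x) axis
    a = np.radians(deg)
    return np.array([[1,         0,          0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

v = np.array([0.0, 0.0, 1.0])    # initial facing direction

a = pitch(45) @ (yaw(45) @ v)    # yaw applied first, then pitch
b = yaw(45) @ (pitch(45) @ v)    # pitch applied first, then yaw

print(a)   # the two results differ,
print(b)   # showing that the order of 3D rotations matters
```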