998 results for 3D-route


Relevance:

20.00%

Publisher:

Abstract:

Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative. This study aims to quantify the accuracy of MRI-based 3D models compared to CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation showed no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models, and therefore MRI is a potential alternative to CT for the generation of 3D models with high geometric accuracy.
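The abstract does not state how the average error was computed; a common way to obtain such a figure is the mean nearest-neighbour distance from the vertices of the segmented model to the reference surface. The sketch below illustrates that assumed metric with synthetic point clouds, and is not necessarily the authors' exact validation pipeline.

```python
# Hypothetical sketch: mean surface deviation between a segmented 3D model
# and a reference model, both represented as point clouds of surface vertices.
# The metric (mean nearest-neighbour distance) is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_deviation(model_points: np.ndarray,
                           reference_points: np.ndarray) -> float:
    """Average distance (mm) from each model vertex to the closest reference point."""
    tree = cKDTree(reference_points)          # index the reference surface
    distances, _ = tree.query(model_points)   # nearest-neighbour distances
    return float(distances.mean())

# Synthetic data standing in for an MRI-derived model and the contact-scanner reference.
rng = np.random.default_rng(0)
reference = rng.normal(size=(5000, 3)) * 20.0                     # "reference" vertices (mm)
mri_model = reference + rng.normal(scale=0.23, size=reference.shape)
print(f"MRI model mean deviation: {mean_surface_deviation(mri_model, reference):.2f} mm")
```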

Relevance:

20.00%

Publisher:

Abstract:

A simple and efficient route for the synthesis of cyclic polymer systems is presented. Linear, furan-protected α-maleimide-ω-cyclopentadienyl functionalized precursors (poly(methyl methacrylate) and poly(tert-butyl acrylate)) were synthesized via atom transfer radical polymerization (ATRP) and subsequent substitution of the bromine end-group with cyclopentadiene. Upon heating at high dilution, deprotection of the dienophile occurs, followed by an intramolecular Diels–Alder reaction yielding a high-purity cyclic product.

Relevance:

20.00%

Publisher:

Abstract:

Gait recognition approaches continue to struggle with challenges including view-invariance, low-resolution data, robustness to unconstrained environments, and fluctuating gait patterns due to subjects carrying goods or wearing different clothes. Although computationally expensive, model-based techniques offer promise over appearance-based techniques for these challenges, as they gather gait features and interpret gait dynamics in skeleton form. In this paper, we propose a fast 3D ellipsoid-based gait recognition algorithm using a 3D voxel model derived from multi-view silhouette images. This approach directly addresses the limitations of view dependency and self-occlusion in existing ellipse-fitting model-based approaches. Voxel models are segmented into four components (left and right legs, above and below the knee), and ellipsoids are fitted to each region using eigenvalue decomposition. Features derived from the ellipsoid parameters are modeled using a Fourier representation to retain the temporal dynamic pattern for classification. We demonstrate the proposed approach using the CMU MoBo database and show that an improvement of 15-20% can be achieved over a 2D ellipse-fitting baseline.
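The ellipsoid-fitting step can be illustrated with a short sketch: the centroid of a segment's voxels and the eigen-decomposition of their covariance give the ellipsoid centre, axis directions and semi-axis lengths. The scale factor on the semi-axes and the synthetic segment below are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of ellipsoid fitting to one limb segment of a voxel model
# via eigenvalue decomposition of the voxel covariance matrix.
import numpy as np

def fit_ellipsoid(voxels: np.ndarray):
    """voxels: (N, 3) array of voxel coordinates belonging to one segment.
    Returns centroid, semi-axis lengths and axis directions."""
    centroid = voxels.mean(axis=0)
    cov = np.cov(voxels - centroid, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    semi_axes = 2.0 * np.sqrt(eigvals)        # assumed scale factor
    return centroid, semi_axes, eigvecs

# Synthetic "below-knee" segment: an elongated blob of voxel coordinates.
rng = np.random.default_rng(1)
segment = rng.normal(size=(2000, 3)) * np.array([5.0, 5.0, 30.0])
centre, axes, directions = fit_ellipsoid(segment)
print("semi-axis lengths:", np.round(axes, 1))
```

The per-frame ellipsoid parameters would then be stacked over the gait cycle and converted to a Fourier representation (for example with np.fft.rfft) to retain the temporal dynamics, as the abstract describes.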

Relevance:

20.00%

Publisher:

Abstract:

This paper reports an investigation of primary school children's understandings of the concept of "square". Twelve students participated in a small-group teaching experiment session, in which they were interviewed and guided to construct a square in a 3D virtual reality learning environment (VRLE). The main findings include mixed levels of "quasi" geometrical understandings, misconceptions about length and angles, and ambiguous use of geometrical language for location, direction, and movement. These findings have implications for future teaching and learning about 2D shapes, with particular reference to VRLEs.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a novel algorithm for localization during navigation that performs matching over local image sequences. Instead of calculating the single location most likely to correspond to the current visual scene, the approach finds candidate matching locations within every section (sub-route) of all learned routes. Through this approach, we reduce the demands upon the image-processing front-end, requiring it only to correctly pick the best-matching image from within a short local image sequence, rather than globally. We applied this algorithm to a challenging downhill mountain-biking visual dataset in which there was significant perceptual and environmental change between repeated traverses of the environment, and compared performance to the feature-based algorithm FAB-MAP. The results demonstrate the potential for localization using visual sequences, even when there are no visual features that can be reliably detected.
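A minimal sketch of the local-matching idea, under the assumption that each image has been reduced to a fixed-length descriptor and is compared by sum of absolute differences; the paper's exact descriptor and sub-route handling may differ.

```python
# Hypothetical sketch: instead of one global best match, return the best-matching
# stored image inside every sub-route of the learned route.
import numpy as np

def local_candidates(current_desc: np.ndarray,
                     route_descs: np.ndarray,
                     subroute_len: int = 10) -> list[int]:
    """current_desc: (D,) descriptor of the current image.
    route_descs: (M, D) descriptors of the learned route.
    Returns, per sub-route, the index of its best-matching stored image."""
    diffs = np.abs(route_descs - current_desc).sum(axis=1)   # SAD per stored image
    candidates = []
    for start in range(0, len(route_descs), subroute_len):
        window = diffs[start:start + subroute_len]
        candidates.append(start + int(np.argmin(window)))    # local best only
    return candidates
```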

Relevance:

20.00%

Publisher:

Abstract:

Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability for displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D game and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
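The thesis's own detectors are model-based and connectionist; as a much simpler illustration of what "quantifying visual correctness" can mean in practice, a golden-image regression test compares a rendered frame against a known-good reference under an assumed tolerance. This is not the thesis's method, only a minimal baseline.

```python
# Illustrative golden-image check: flag a rendered frame as inconsistent when its
# mean per-pixel difference from a reference frame exceeds an assumed tolerance.
import numpy as np

def frame_matches_reference(rendered: np.ndarray,
                            reference: np.ndarray,
                            max_mean_error: float = 2.0) -> bool:
    """Both arrays are (H, W, 3) uint8 frames; returns True when the mean
    absolute per-pixel difference stays under the chosen tolerance."""
    if rendered.shape != reference.shape:
        return False
    error = np.abs(rendered.astype(np.int16) - reference.astype(np.int16))
    return float(error.mean()) <= max_mean_error
```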

Relevance:

20.00%

Publisher:

Abstract:

Since the availability of 3D full-body scanners and the associated software systems for operations with large point clouds, 3D anthropometry has been marketed as a breakthrough and milestone in ergonomic design. The assumptions made by proponents of the 3D paradigm need to be critically reviewed, though. 3D anthropometry has advantages as well as shortfalls, which need to be carefully considered. While it is apparent that the measurement of a full-body point cloud allows for easier storage of raw data and improves quality control, the difficulty of calculating standardized measurements from the point cloud is widely underestimated. Early studies that used 3D point clouds to derive anthropometric dimensions showed unacceptable deviations from the standardized results measured manually. While 3D human point clouds provide a valuable tool to replicate specific individuals for further virtual studies, or to personalize garments, their use in ergonomic design must be critically assessed. Ergonomic, volumetric problems are defined by their two-dimensional boundaries or one-dimensional sections; a 1D/2D approach is therefore sufficient to solve an ergonomic design problem. As a consequence, all modern 3D human manikins are defined by the underlying anthropometric girths (2D) and lengths/widths (1D), which can be measured efficiently using manual techniques. Traditionally, ergonomists have taken a statistical approach and designed for generalized percentiles of the population rather than for a single user. The underlying method is based on the distribution functions of meaningful one- and two-dimensional anthropometric variables. Compared to these variables, the distribution of human volume has no ergonomic relevance. On the other hand, if volume is to be seen as a two-dimensional integral or distribution function of length and girth, the calculation of combined percentiles (a common ergonomic requirement) is undefined. Consequently, we suggest critically reviewing the cost and use of 3D anthropometry, and we recommend making proper use of the widely available one- and two-dimensional anthropometric data in ergonomic design.
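The point about combined percentiles can be made concrete with a small numerical illustration using synthetic data and an assumed stature-girth correlation: requiring a design to cover the 5th-95th percentile on each of two correlated dimensions covers noticeably fewer than 90% of people, so marginal percentiles do not combine in any simple way.

```python
# Hedged numerical illustration with synthetic, assumed distributions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
stature = rng.normal(1750, 70, n)                              # mm, assumed distribution
girth = 900 + 0.3 * (stature - 1750) + rng.normal(0, 60, n)    # mm, assumed correlation

lo_s, hi_s = np.percentile(stature, [5, 95])
lo_g, hi_g = np.percentile(girth, [5, 95])
inside_both = ((stature >= lo_s) & (stature <= hi_s) &
               (girth >= lo_g) & (girth <= hi_g)).mean()
print(f"Share inside the 5th-95th percentile on BOTH dimensions: {inside_both:.1%}")
# Roughly 82-84%, well below 90%: marginal percentiles do not combine simply.
```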

Relevance:

20.00%

Publisher:

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-and-under age group, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI; however, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed, and this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs. One of the artefacts observed is the step artefact, which is believed to occur from the random movements of the volunteer during a scan; this needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple segmentation methods for MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora and segmenting them using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an iterative closest point (ICP) algorithm-based alignment method. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, and there was a statistically significant difference between the accuracy of models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm compared to the 0.18 mm average deviation of CT-based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies caused by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
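A minimal sketch of the multilevel thresholding idea described above: split the volume into proximal, diaphyseal and distal thirds along the bone axis and segment each with its own threshold. Otsu's method is used here as a stand-in for the study's threshold selection method, which is an assumption on my part.

```python
# Sketch of region-wise (multilevel) thresholding of a bone scan volume.
import numpy as np
from skimage.filters import threshold_otsu

def multilevel_segment(volume: np.ndarray) -> np.ndarray:
    """volume: (Z, Y, X) intensity volume with the bone's long axis along Z.
    Returns a boolean bone mask built from three region-specific thresholds."""
    mask = np.zeros(volume.shape, dtype=bool)
    bounds = np.linspace(0, volume.shape[0], 4).astype(int)   # proximal/diaphyseal/distal
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        region = volume[lo:hi]
        mask[lo:hi] = region > threshold_otsu(region)         # per-region threshold
    return mask
```

A 3D surface could then be extracted from the resulting mask (for example with marching cubes) before comparison against the reference model.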

Relevance:

20.00%

Publisher:

Abstract:

This series of research vignettes is aimed at sharing current and interesting research findings from our team and other international entrepreneurship researchers. In this vignette, Professor Per Davidsson considers some of the dynamics associated with firm growth.

Relevance:

20.00%

Publisher:

Abstract:

In the cancer research field, most in vitro studies still rely on two-dimensional (2D) cultures. However, the trend is rapidly shifting towards three-dimensional (3D) culture systems, because 3D models better recapitulate the microenvironment of cells and therefore yield cellular and molecular responses that more accurately describe the pathophysiology of cancer. By adopting technology platforms established by the tissue engineering discipline, it is now possible to grow cancer cells in extracellular matrix (ECM)-like environments and dictate the biophysical and biochemical properties of the matrix. In addition, 3D models can be modified to recapitulate different stages of cancer progression, for instance from initial tumor development to metastasis. Inevitably, recapitulating a heterotypic condition comprising more than one cell type requires a more complex 3D model. To date, 3D models for studying prostate cancer (CaP)-bone interactions are still lacking. Therefore, the aim of this study is to establish a co-culture model that allows investigation of direct and indirect CaP-bone interactions. Prior to that, 3D polyethylene glycol (PEG)-based hydrogel cultures for CaP cells were first developed and their growth conditions optimised. Characterization of the 3D hydrogel cultures shows that LNCaP cells form a multicellular mass that resembles an avascular tumor. Besides the difference in cell morphology, the response of LNCaP cells to stimulation with the androgen analogue R1881 differs from that of cells in 2D cultures; this discrepancy between 2D and 3D cultures is likely associated with cell-cell contact, density and ligand-receptor interactions. Following the 3D monoculture study, a 3D direct co-culture model of CaP cells and a human tissue-engineered bone construct (hTEBC) was developed. Interactions between the CaP cells and human osteoblasts (hOBs) resulted in elevation of matrix metalloproteinase 9 (MMP9) for PC-3 cells and prostate-specific antigen (PSA) for LNCaP cells. To further investigate the paracrine interaction of CaP cells and hOBs, a 3D indirect co-culture model was developed in which LNCaP cells embedded within PEG hydrogels were co-cultured with the hTEBC. The cellular changes observed were found to reflect the early events of CaP cells colonizing the bone site. Interestingly, in the absence of androgens, up-regulation of PSA and other kallikreins was also detected in the co-culture compared to the LNCaP monoculture; this non-androgenic stimulation could be triggered by soluble factors secreted by the hOBs, such as interleukin-6. There were also a decrease in alkaline phosphatase (ALP) activity and a down-regulation of hOB genes when co-cultured with LNCaP cells, changes that have not been previously described; these genes include transforming growth factor β1 (TGFβ1), osteocalcin and vimentin. However, no changes to epithelial markers (e.g. E-cadherin, cytokeratin 8) were observed in either cell type in the co-culture. Some of these intriguing, previously undescribed changes observed in the co-cultures enrich the basic knowledge of the CaP cell-bone interaction. This study provides evidence of the feasibility and versatility of our established 3D models, which can be adapted to test various hypotheses pertaining to the underlying mechanisms of bone metastasis and could provide a vehicle for anticancer drug screening in the future.

Relevance:

20.00%

Publisher:

Abstract:

Calcium silicate (CaSiO3, CS) ceramics have received significant attention for application in bone regeneration due to their excellent in vitro apatite-mineralization ability; however, preparing porous CS scaffolds with a controllable pore structure for bone tissue engineering remains a challenge. Conventional methods cannot efficiently control the pore structure and mechanical strength of CS scaffolds, resulting in unstable in vivo osteogenesis. This study sets out to solve these problems by applying a modified 3D-printing method to prepare highly uniform CS scaffolds with controllable pore structure and improved mechanical strength. The in vivo osteogenesis of the prepared 3D-printed CS scaffolds was further investigated by implanting them in femur defects of rats. The results show that CS scaffolds prepared by the modified 3D-printing method have uniform scaffold morphology, and that their pore size and pore structure can be efficiently adjusted. The compressive strength of 3D-printed CS scaffolds is around 120 times that of conventional polyurethane-templated CS scaffolds. 3D-printed CS scaffolds possess excellent apatite-mineralization ability in simulated body fluids. Micro-CT analysis shows that 3D-printed CS scaffolds play an important role in assisting the regeneration of bone defects in vivo; the healing of bone defects implanted with 3D-printed CS scaffolds is clearly better than that of defects implanted with 3D-printed β-tricalcium phosphate (β-TCP) scaffolds at both 4 and 8 weeks. Hematoxylin and eosin (H&E) staining shows that 3D-printed CS scaffolds induce higher-quality newly formed bone than 3D-printed β-TCP scaffolds. Immunohistochemical analyses further show stronger expression of human type I collagen (COL1) and alkaline phosphatase (ALP) in the bone matrix of the 3D-printed CS scaffolds than in the 3D-printed β-TCP scaffolds. Considering these important advantages, such as controllable structure architecture, significantly improved mechanical strength, excellent in vivo osteogenesis and no need for a second sintering step, the prepared 3D-printed CS scaffolds are a promising material for application in bone regeneration.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question: how much visual information is actually needed to conduct effective navigation? The algorithm actively searches for the best local image matches within a sliding window of short route segments or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, the technique requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7-bit Lego light sensors, as well as using 16- and 32-pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
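A hedged sketch of how images can be reduced to the extremes the abstract mentions (a handful of pixels at 2-bit depth) and still compared, using block averaging and sum of absolute differences. The specific sizes and quantisation below are illustrative, not the paper's exact processing.

```python
# Reduce a grayscale image to a tiny, low bit-depth representation and compare.
import numpy as np

def to_low_fidelity(image: np.ndarray, size=(2, 2), bits: int = 2) -> np.ndarray:
    """image: (H, W) grayscale array with values in [0, 255].
    Block-average down to `size` pixels and quantise to 2**bits levels."""
    h, w = image.shape
    by, bx = h // size[0], w // size[1]
    small = image[:by * size[0], :bx * size[1]] \
        .reshape(size[0], by, size[1], bx).mean(axis=(1, 3))
    levels = 2 ** bits
    return np.floor(small / 256.0 * levels).clip(0, levels - 1)

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences between two low-fidelity images."""
    return float(np.abs(a - b).sum())
```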

Relevance:

20.00%

Publisher:

Abstract:

Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter, is a challenging task for state-of-the-art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions, dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end; instead, it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets, we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme: repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
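The core idea can be sketched as follows (my simplification of the SeqSLAM idea, not the released implementation): build a difference matrix between recent query frames and the stored route, then score each candidate location by the summed differences along a constant-velocity line through the last few frames, keeping the lowest-cost, most coherent sequence.

```python
# Sketch of coherent-sequence matching over an image difference matrix.
import numpy as np

def seq_match(diff_matrix: np.ndarray, ds: int = 10, v: float = 1.0) -> int:
    """diff_matrix[i, j]: difference between query frame i and database frame j.
    Returns the database index best supported by the last `ds` query frames,
    assuming a constant relative velocity `v`."""
    n_query, n_db = diff_matrix.shape
    q_idx = np.arange(n_query - ds, n_query)           # recent query frames
    best_db, best_cost = -1, np.inf
    for end in range(ds, n_db):                        # candidate end locations
        db_idx = np.round(end - v * (q_idx[-1] - q_idx)).astype(int)
        cost = diff_matrix[q_idx, db_idx].sum()        # cost along the straight line
        if cost < best_cost:
            best_db, best_cost = end, cost
    return best_db
```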