Bone tissue engineering : reconstruction of critical sized segmental bone defects in the ovine tibia
Abstract:
Well-established therapies for bone defects are restricted to bone grafts, which face significant disadvantages (limited availability, donor site morbidity, insufficient integration). Therefore, the objective was to develop an alternative approach by investigating the regenerative potential of medical grade polycaprolactone-tricalcium phosphate (mPCL-TCP) and silk-hydroxyapatite (silk-HA) scaffolds. Critical sized ovine tibial defects were created and stabilized. Defects were left untreated, or reconstructed with autologous bone grafts (ABG), mPCL-TCP scaffolds, or silk-HA scaffolds. Animals were observed for 12 weeks. X-ray analysis, torsion testing and quantitative computed tomography (CT) analyses were performed. Radiological analysis confirmed the critical nature of the defects. Full defect bridging occurred in the autograft group and partial bridging in the mPCL-TCP group. Only limited bone formation was observed with silk-HA scaffolds. Biomechanical testing revealed a higher torsional moment/stiffness (p < 0.05), and CT analysis a significantly higher amount of bone formation, for the ABG group when compared to the silk-HA group. No significant difference was determined between the ABG and mPCL-TCP groups. The results of this study suggest that mPCL-TCP scaffolds can serve as an alternative to autologous bone grafting in long bone defect regeneration. The combination of mPCL-TCP with osteogenic cells or growth factors represents an attractive means to further enhance bone formation.
Abstract:
Teleradiology allows medical images to be transmitted over electronic networks for clinical interpretation and for improved healthcare access, delivery and standards. Although such remote transmission of images raises various new and complex legal and ethical issues, including image retention and fraud, privacy and malpractice liability, the security measures used in teleradiology have remained largely unchanged. Addressing this problem naturally warrants investigation of the security measures, their relative functional limitations, and the scope for developing them further. In this paper, starting with various security and privacy standards, the security requirements of medical images as well as expected threats in teleradiology are reviewed. This makes it possible to determine the limitations of the conventional measures used against the expected threats. Further, we thoroughly study the utilization of digital watermarking for teleradiology. After examining the key attributes and roles of various watermarking parameters, we justify watermarking over conventional security measures in terms of their various objectives, properties, and requirements. We also outline the main objectives of medical image watermarking for teleradiology, and provide recommendations on suitable watermarking techniques and their characterization. Finally, concluding remarks and directions for future research are presented.
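The abstract surveys watermarking at a conceptual level without fixing a scheme. As a minimal illustration of the simplest spatial-domain approach, the sketch below embeds an identifying bit string into the least significant bits of an 8-bit image. The function names and the 16-bit tag are hypothetical, and real medical-image watermarking would additionally need robustness and region-of-interest integrity, which plain LSB embedding does not provide.

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a flat bit array into the least significant bits of an 8-bit image."""
    flat = image.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear LSB, set to payload bit
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits embedded bits."""
    return image.flatten()[:n_bits] & 1

# Example: hide a hypothetical 16-bit identifying tag in a synthetic 8x8 "scan".
rng = np.random.default_rng(0)
scan = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
tag = rng.integers(0, 2, size=16, dtype=np.uint8)
marked = embed_lsb(scan, tag)
assert np.array_equal(extract_lsb(marked, 16), tag)
# Embedding changes each pixel by at most 1 grey level.
assert np.max(np.abs(marked.astype(int) - scan.astype(int))) <= 1
```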
Abstract:
Hardly a month goes by in the scientific literature without some new material “X” being reported as suitable for growing cell type “Y”, for the potential purpose of treating disease “Z”. Thus, when fibroin, a protein found in silk, was first proposed as a biomaterial for cell growth [1], it joined a long list of other materials of both natural and synthetic origin. Nevertheless, in the second decade of the Asian Century it is perhaps befitting that a material of so much importance to the continent’s cultural and economic history should become the focus of cutting-edge biomedical research. Sentiments aside, however, silk fibroin possesses quite a unique combination of properties which makes it a promising candidate for repairing the eye, and especially for treating damage to the cornea, the transparent window at the front of the eye.
Abstract:
Purpose: Arbitrary numbers of corneal confocal microscopy images have been used for analysis of corneal subbasal nerve parameters under the implicit assumption that they are a representative sample of the central corneal nerve plexus. The purpose of this study is to present a technique for quantifying the number of random central corneal images required to achieve an acceptable level of accuracy in the measurement of corneal nerve fiber length and branch density. Methods: Every possible combination of 2 to 16 images (where the mean of all 16 was deemed the true mean) of the central corneal subbasal nerve plexus, not overlapping by more than 20%, was assessed for nerve fiber length and branch density in 20 subjects with type 2 diabetes and varying degrees of functional nerve deficit. Mean ratios were calculated to allow comparisons between and within subjects. Results: In assessing nerve branch density, eight randomly chosen images not overlapping by more than 20% produced an average that was within 30% of the true mean 95% of the time. A similar sampling strategy of five images was within 13% of the true mean 80% of the time for corneal nerve fiber length. Conclusions: The “sample combination analysis” presented here can be used to determine the sample size required for a desired level of accuracy of quantification of corneal subbasal nerve parameters. This technique may have applications in other biological sampling studies.
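The "sample combination analysis" can be sketched as follows: for a chosen sample size k, enumerate every k-image combination and report what fraction of those combinations has a mean within a given tolerance of the all-image "true" mean. The readings below are hypothetical, and this sketch ignores the paper's 20%-overlap constraint between images.

```python
from itertools import combinations
from statistics import mean

def sample_combination_analysis(values, k, tolerance):
    """Fraction of all k-image combinations whose mean lies within
    `tolerance` (a proportion) of the 'true' mean over all images."""
    true_mean = mean(values)
    combos = list(combinations(values, k))
    within = sum(
        1 for c in combos if abs(mean(c) - true_mean) <= tolerance * true_mean
    )
    return within / len(combos)

# Hypothetical per-image nerve fibre length readings for one subject.
readings = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5, 13.2, 14.7,
            15.8, 14.0, 16.3, 13.5, 15.2, 14.4, 15.9, 14.6]
coverage = sample_combination_analysis(readings, k=5, tolerance=0.13)
print(f"{coverage:.0%} of 5-image samples fall within 13% of the true mean")
```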
Abstract:
In 2010, the State Library of Queensland (SLQ) donated its out-of-copyright Queensland images to Wikimedia Commons. One direct effect of publishing the collections on Wikimedia Commons is the ability of general audiences to participate and help the library process the images in the collection. This paper discusses a project that explored user participation in the categorisation of the State Library of Queensland digital image collections. The outcomes of this project can be used to gain a better understanding of user participation that leads to improved access to library digital collections. Two techniques for data collection were used: document analysis and interviews. Document analysis was performed on the Wikimedia Commons monthly reports, while interviews were used as the main data collection technique in this research. The data collected from document analysis helped the researchers devise appropriate questions for the interviews. The interviews were undertaken with participants divided into two groups: SLQ staff members and Wikimedians (users who participate in Wikimedia). The two sets of data collected from participants were analysed independently and compared. This method was useful for understanding the differences between the experiences of categorisation from the librarians’ and the users’ perspectives. This paper provides a discussion of the preliminary findings that emerged from each participant group. The research provides preliminary information about the extent of user participation in the categorisation of SLQ collections on Wikimedia Commons, which can be used by SLQ and other interested libraries when describing their digital content to improve user access to their collections in the future.
Rotorcraft collision avoidance using spherical image-based visual servoing and single point features
Abstract:
This paper presents a reactive collision avoidance method for small unmanned rotorcraft using spherical image-based visual servoing. Only a single point feature is used to guide the aircraft in a safe spiral-like trajectory around the target, whilst a spherical camera model ensures the target always remains visible. A decision strategy to stop the avoidance control is derived based on the properties of spiral-like motion, and the effect of accurate range measurements on the control scheme is discussed. We show that using a poor range estimate does not significantly degrade the collision avoidance performance, thus relaxing the need for accurate range measurements. We present simulated and experimental results using a small quadrotor to validate the approach.
Abstract:
Typical flow fields in a stormwater gross pollutant trap (GPT) with blocked retaining screens were experimentally captured and visualised. Particle image velocimetry (PIV) software was used to capture the flow field data by tracking neutrally buoyant particles with a high speed camera. A technique was developed to apply the Image Based Flow Visualization (IBFV) algorithm to the experimental raw dataset generated by the PIV software. The dataset consisted of scattered 2D point velocity vectors and the IBFV visualisation facilitates flow feature characterisation within the GPT. The flow features played a pivotal role in understanding gross pollutant capture and retention within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate specific areas and identify the flow features within the GPT.
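Before a texture-based algorithm such as IBFV can be applied, the scattered 2D point velocity vectors produced by PIV must be resampled onto a regular grid. One simple way to do this is inverse-distance weighting; this is a hypothetical sketch, as the paper does not specify its interpolation scheme.

```python
import numpy as np

def idw_grid(points, vectors, grid_x, grid_y, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation of scattered 2D velocity
    vectors onto a regular grid, as a regridding step for texture-based
    visualisation of PIV output."""
    gx, gy = np.meshgrid(grid_x, grid_y)  # grid node coordinates
    # Squared distance from every grid node to every scattered sample.
    d2 = (gx[..., None] - points[:, 0]) ** 2 + (gy[..., None] - points[:, 1]) ** 2
    w = 1.0 / (d2 ** (power / 2.0) + eps)  # weights, shape (ny, nx, n_points)
    w /= w.sum(axis=-1, keepdims=True)     # normalise so weights sum to 1
    u = w @ vectors[:, 0]                  # weighted average of x-components
    v = w @ vectors[:, 1]                  # weighted average of y-components
    return u, v

# Four scattered PIV samples of a uniform rightward flow.
pts = np.array([[0.1, 0.1], [0.9, 0.2], [0.3, 0.8], [0.7, 0.6]])
vec = np.array([[1.0, 0.0]] * 4)
u, v = idw_grid(pts, vec, np.linspace(0, 1, 5), np.linspace(0, 1, 5))
# A uniform input field should regrid to a uniform output field.
assert np.allclose(u, 1.0) and np.allclose(v, 0.0)
```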
The backfilled GEI : a cross-capture modality gait feature for frontal and side-view gait recognition
Abstract:
In this paper, we propose a novel direction for gait recognition research by introducing a new capture-modality-independent, appearance-based feature which we call the Back-filled Gait Energy Image (BGEI). It can be constructed from both frontal depth images and the more commonly used side-view silhouettes, allowing the feature to be applied across these two differing capture systems using the same enrolled database. To evaluate this new feature, a frontally captured depth-based gait dataset was created containing 37 unique subjects, a subset of which also contained sequences captured from the side. The results demonstrate that the BGEI can effectively be used to identify subjects by their gait across these two differing input devices, achieving a rank-1 match rate of 100% in our experiments. We also compare the BGEI against the GEI and GEV in their respective domains, using the CASIA dataset and our depth dataset, showing that it compares favourably against them. The experiments were performed using a sparse-representation-based classifier with a locally discriminating input feature space, which shows a significant improvement in performance over other classifiers used in the gait recognition literature, achieving state-of-the-art results with the GEI on the CASIA dataset.
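The BGEI builds on the standard Gait Energy Image, which is simply the pixel-wise mean of size-normalised, aligned binary silhouettes over a gait cycle. A minimal sketch of that underlying computation follows; the back-filling step from frontal depth images that makes the BGEI novel is not reproduced here.

```python
import numpy as np

def gait_energy_image(silhouettes: np.ndarray) -> np.ndarray:
    """Standard Gait Energy Image: pixel-wise mean of aligned binary
    silhouettes, shape (frames, height, width) with values in {0, 1}.
    Frequently-occupied pixels get high 'energy', rarely-occupied low."""
    assert silhouettes.ndim == 3
    return silhouettes.mean(axis=0)

# Toy example: two 4x4 "silhouettes" that overlap in one column.
s1 = np.zeros((4, 4)); s1[:, 1] = 1          # occupies column 1 only
s2 = np.zeros((4, 4)); s2[:, 1:3] = 1        # occupies columns 1 and 2
gei = gait_energy_image(np.stack([s1, s2]))
# Pixels covered in both frames get energy 1.0; in one frame, 0.5.
assert np.allclose(gei[:, 1], 1.0) and np.allclose(gei[:, 2], 0.5)
```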
Abstract:
Many state-of-the-art vision-based Simultaneous Localisation And Mapping (SLAM) and place recognition systems compute the salience of visual features in their environment. As computing salience can be problematic in radically changing environments, new low-resolution feature-less systems have been introduced, such as SeqSLAM, all of which consider the whole image. In this paper, we implement a supervised classifier system (UCS) to learn the salience of image regions for place recognition by feature-less systems. SeqSLAM only slightly benefits from the results of training on the challenging real-world Eynsham dataset, as it already appears to filter less useful regions of a panoramic image. However, when recognition is limited to specific image regions, performance improves by more than an order of magnitude by utilising the learnt image region saliency. We then investigate whether the region salience generated from the Eynsham dataset generalizes to another car-based dataset using a perspective camera. The results suggest the general applicability of an image region salience mask for optimizing route-based navigation applications.
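A feature-less, whole-image matcher in the spirit of SeqSLAM reduces to comparing low-resolution image sequences by summed absolute differences, and a region-salience mask then simply restricts which pixels contribute. The sketch below is a strong simplification: there is no patch normalisation or velocity search, and the mask is supplied directly rather than learnt by a classifier as in the paper.

```python
import numpy as np

def sequence_match(query_seq, db_seqs, mask=None):
    """SeqSLAM-style place recognition sketch: score each candidate
    database sequence by summed absolute differences against the query
    sequence, optionally restricted to salient regions by a binary mask."""
    if mask is None:
        mask = np.ones_like(query_seq[0])  # no mask: use the whole image
    scores = []
    for db_seq in db_seqs:
        # Accumulate masked per-frame differences over the whole sequence.
        diff = sum(np.abs(q - d)[mask > 0].sum() for q, d in zip(query_seq, db_seq))
        scores.append(diff)
    return int(np.argmin(scores))  # index of best-matching database sequence

rng = np.random.default_rng(1)
db = [rng.random((3, 8, 8)) for _ in range(5)]    # 5 places, 3 frames each
query = db[2] + rng.normal(0, 0.01, (3, 8, 8))    # noisy revisit of place 2
assert sequence_match(query, db) == 2
```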
Abstract:
Since the first destination image studies were published in the early 1970s, the field has become one of the most popular in the tourism literature. While reviews of the destination image literature show no commonly agreed conceptualisation of the construct, researchers have predominantly used structured questionnaires for measurement. There has been criticism that the way some of these scales have been selected means a greater likelihood of attributes being irrelevant to participants, which opens up the risk of stimulating uninformed responses. The issue of uninformed response was first raised as a source of error 60 years ago. However, there has been little, if any, discussion in relation to destination image measurement, studies of which often require participants to provide opinion-driven rather than fact-based responses. This paper reports the trial of a ‘don’t know’ (DK) non-response option for participants in two destination image questionnaires. It is suggested that the use of a DK option provides participants with an alternative to i) skipping the question, ii) using the scale midpoint to denote neutrality, or iii) providing an uninformed response. High levels of DK usage by participants can then alert the marketer to the need to improve awareness of destination performance for potentially salient attributes.
Abstract:
The rank and census are two filters based on order statistics which have been applied to the image matching problem for stereo pairs. Advantages of these filters include their robustness to radiometric distortion and to small amounts of random noise, and their amenability to hardware implementation. In this paper, a new matching algorithm is presented which provides an overall framework for matching and is used to compare the rank and census techniques with standard matching metrics. The algorithm was tested using both real stereo pairs and a synthetic pair with ground truth. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census filters have the additional advantage that their computational overhead is less than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.
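The census transform itself is well defined: each pixel is encoded as a bit string recording which neighbours in a window are darker than it, and candidate matches are compared by Hamming distance. The sketch below also demonstrates the robustness property the abstract relies on: a monotonic radiometric distortion (gain and offset) leaves the census codes unchanged.

```python
import numpy as np

def census_transform(img: np.ndarray, r: int = 1) -> np.ndarray:
    """Census transform: encode each pixel as a bit string recording
    whether each neighbour in a (2r+1)^2 window is darker than the centre."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode="edge")  # replicate borders
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            shifted = padded[r + dy : r + dy + h, r + dx : r + dx + w]
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per-pixel Hamming distance between two census-coded images."""
    x = a ^ b
    return np.array([bin(v).count("1") for v in x.ravel()]).reshape(x.shape)

rng = np.random.default_rng(2)
left = rng.random((6, 6))
right = 0.5 * left + 0.2  # monotonic radiometric distortion of the same view
# Ordering of intensities is preserved, so the census codes are identical.
assert hamming(census_transform(left), census_transform(right)).sum() == 0
```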
Abstract:
This paper considers the problem of reconstructing the motion of a 3D articulated tree from 2D point correspondences subject to some temporal prior. Hitherto, smooth motion has been encouraged using a trajectory basis, yielding a hard combinatorial problem with time complexity growing exponentially in the number of frames. Branch and bound strategies have previously attempted to curb this complexity whilst maintaining global optimality. However, they provide no guarantee of being more efficient than exhaustive search. Inspired by recent work which reconstructs general trajectories using compact high-pass filters, we develop a dynamic programming approach which scales linearly in the number of frames, leveraging the intrinsically local nature of filter interactions. Extension to affine projection enables reconstruction without estimating cameras.
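The key idea of scaling linearly in the number of frames can be illustrated with a generic Viterbi-style dynamic program: choose one candidate per frame to minimise a per-frame cost plus a smoothness penalty between consecutive frames. This is only a schematic stand-in; the paper's actual formulation couples frames through compact high-pass filters, and the 1-D candidate "positions" here are hypothetical.

```python
import numpy as np

def dp_trajectory(unary, smooth_weight=1.0):
    """Viterbi-style dynamic program: pick one candidate per frame to
    minimise unary cost plus squared jumps between consecutive choices.
    Runtime is linear in the number of frames, O(F * K^2)."""
    F, K = unary.shape
    vals = np.arange(K, dtype=float)       # hypothetical candidate 'positions'
    cost = unary[0].copy()                 # best cost ending at each candidate
    back = np.zeros((F, K), dtype=int)     # backpointers for path recovery
    for f in range(1, F):
        trans = smooth_weight * (vals[:, None] - vals[None, :]) ** 2
        total = cost[:, None] + trans      # total[i, j]: come from i, go to j
        back[f] = np.argmin(total, axis=0)
        cost = total[back[f], np.arange(K)] + unary[f]
    path = [int(np.argmin(cost))]          # backtrack from the cheapest endpoint
    for f in range(F - 1, 0, -1):
        path.append(int(back[f][path[-1]]))
    return path[::-1]

# Toy unary costs favouring candidate 1 in every frame.
unary = np.full((4, 3), 5.0)
unary[:, 1] = 0.0
assert dp_trajectory(unary) == [1, 1, 1, 1]
```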
Abstract:
We applied a texture-based flow visualisation technique to a numerical hydrodynamic model of the Pumicestone Passage in southeast Queensland, Australia. The quality of the visualisations produced with our flow visualisation tool is compared with animations generated using more traditional drogue release plots and velocity contour and vector techniques. The texture-based method is found to be far more effective in visualising advective flow within the model domain. In some instances, it also makes it easier for the researcher to identify specific hydrodynamic features within the complex flow regimes of this shallow tidal barrier estuary, as compared with the direct and geometric-based methods.