997 results for 3D Selection
Abstract:
Immersive virtual environments (IVEs) have the potential to afford natural interaction in the three-dimensional (3D) space around a user. However, interaction performance in 3D mid-air is often reduced and depends on a variety of ergonomic factors, as well as the user's endurance, muscular strength, and fitness. In particular, in contrast to traditional desktop-based setups, users often cannot rest their arms in a comfortable pose during the interaction. In this article we analyze the impact of comfort on 3D selection tasks in an immersive desktop setup. First, in a pre-study we identified how comfortable or uncomfortable specific interaction positions and poses are for users who are standing upright. Then, we investigated differences in 3D selection task performance when users interact with their hands in a comfortable or uncomfortable body pose, while sitting on a chair in front of a table with the VE displayed on a head-mounted display (HMD). We conducted a Fitts' Law experiment to evaluate selection performance in different poses. The results suggest that users achieve a significantly higher performance in a comfortable pose when they rest their elbow on the table.
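For illustration, the following minimal Python sketch shows the standard Shannon formulation of Fitts' Law used in such selection experiments; the distance, target width and movement time below are hypothetical values, not data from this study.

import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits.
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    # Throughput (bits/s) for one selection condition.
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical example: a 0.30 m reach to a 0.05 m target selected in 1.2 s.
print(round(throughput(0.30, 0.05, 1.2), 2))  # ~2.34 bits/s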
Abstract:
In virtual reality applications, there is an aim to provide real-time graphics which run at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high-persistence image artifact. The effect of this artifact is that movement may lose continuity, and the image jumps from one frame to another. In this paper, we discuss our initial exploration of the effects of high-persistence frames caused by low refresh rates and compare it to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low frame rate simulation images are displayed with low persistence by blanking out the display during the extra time such an image would otherwise be displayed. In order to isolate the visual effects, we constructed a simulator for low and high persistence displays that does not affect input latency. A controlled user study comparing the three conditions for the tasks of 3D selection and navigation was conducted. Results indicate that the low-persistence display technique may not negatively impact user experience or performance as compared to the high-persistence case. Directions for future work on the use of low-persistence displays for low frame rate situations are discussed.
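As a rough illustration of the blanking idea described above, here is a minimal Python sketch; the refresh and frame rates and the show_image/show_black callbacks are assumptions for the sketch, not the authors' implementation.

# Assumed rates; the study's hardware and timings may differ.
DISPLAY_HZ = 90                               # display refresh rate
SIM_HZ = 15                                   # low simulation frame rate
REFRESHES_PER_FRAME = DISPLAY_HZ // SIM_HZ    # 6 refreshes per rendered frame

def present(refresh_index, show_image, show_black):
    # Show each simulation frame only on the first refresh of its interval;
    # blank the display for the remaining refreshes instead of holding the
    # image, which is what produces the high-persistence artifact.
    if refresh_index % REFRESHES_PER_FRAME == 0:
        show_image(refresh_index // REFRESHES_PER_FRAME)  # simulation frame index
    else:
        show_black()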
Abstract:
In most recent substructuring methods, a fundamental role is played by the coarse space. For some of these methods (e.g. BDDC and FETI-DP), its definition relies on a 'minimal' set of coarse nodes (sometimes called corners) which assures invertibility of local subdomain problems and also of the global coarse problem. This basic set is typically enhanced by enforcing continuity of functions at some generalized degrees of freedom, such as average values on edges or faces of subdomains. We revisit existing algorithms for the selection of corners. The main contribution of this paper consists of proposing a new heuristic algorithm for this purpose. Considering faces as the basic building blocks of the interface, inherent parallelism, and better robustness with respect to disconnected subdomains are among the features of the new technique. The advantages of the presented algorithm in comparison to some earlier approaches are demonstrated on three engineering problems of structural analysis solved by the BDDC method.
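The abstract does not spell out the algorithm, so the Python sketch below only illustrates the general idea of a face-based corner-selection heuristic: for each interface face shared by two subdomains, pick a small set of well-separated nodes to serve as coarse corners. The function name and the farthest-point criterion are illustrative assumptions, not the paper's method.

import math

def pick_corners_on_face(face_nodes, coords, n_corners=3):
    # face_nodes: list of node ids lying on one interface face;
    # coords: mapping from node id to (x, y, z) coordinates.
    # Greedy farthest-point selection: start from an arbitrary node, then
    # repeatedly add the node farthest from those already chosen, so the
    # corners are well separated and constrain rigid-body modes.
    corners = [face_nodes[0]]
    while len(corners) < min(n_corners, len(face_nodes)):
        corners.append(max(
            (n for n in face_nodes if n not in corners),
            key=lambda n: min(math.dist(coords[n], coords[c]) for c in corners)))
    return set(corners)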
Abstract:
We present a new technique called 'Tilt Menu' for better extending the selection capabilities of pen-based interfaces. The Tilt Menu is implemented by using the 3D orientation information of pen devices while performing selection tasks. The Tilt Menu has the potential to aid traditional one-handed techniques as it simultaneously generates a secondary input (e.g., a command or parameter selection) while drawing/interacting with the pen tip, without having to use the second hand or another device. We conduct two experiments to explore the performance of the Tilt Menu. In the first experiment, we analyze the effect of parameters of the Tilt Menu, such as the menu size and orientation of the item, on its usability. Results of the first experiment suggest some design guidelines for the Tilt Menu. In the second experiment, the Tilt Menu is compared to two types of techniques while performing connect-the-dot tasks using a freeform drawing mechanism. Results of the second experiment show that the Tilt Menu performs better in comparison to the Tool Palette, and is as good as the Toolglass.
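As an illustration of how pen tilt can drive a secondary selection, the Python sketch below maps the pen's tilt direction to one of N pie-menu items once the tilt magnitude exceeds an activation threshold; the threshold, item count and function names are assumptions, not the authors' parameters.

import math

def tilt_menu_item(tilt_x_deg, tilt_y_deg, n_items=8, activation_deg=20.0):
    # Magnitude of the tilt away from vertical, from the two tilt components.
    magnitude = math.hypot(tilt_x_deg, tilt_y_deg)
    if magnitude < activation_deg:
        return None                       # pen near vertical: no menu selection
    # Tilt direction (azimuth) selects one angular sector of the pie menu.
    azimuth = math.atan2(tilt_y_deg, tilt_x_deg) % (2 * math.pi)
    sector = 2 * math.pi / n_items
    return int(azimuth // sector)         # index of the selected menu item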
Abstract:
In this study, we investigate the fabrication of 3D porous poly(lactic-co-glycolic acid) (PLGA) scaffolds using the thermally-induced phase separation technique. The current study focuses on the selection of alternative solvents for this process using a number of criteria, including predicted solubility, toxicity, removability and processability. Solvents were removed via either vacuum freeze-drying or leaching, depending on their physical properties. The residual solvent was tested using gas chromatography-mass spectrometry. A large range of porous, highly interconnected scaffold architectures with tunable pore size and alignment was obtained, including combined macro- and microporous structures and an entirely novel 'porous-fibre' structure. The morphological features of the most promising poly(lactic-co-glycolic acid) scaffolds were analysed via scanning electron microscopy and X-ray micro-computed tomography in both two and three dimensions. The Young's moduli of the scaffolds under conditions of temperature, pH and ionic strength similar to those found in the body were tested and were found to be highly dependent on the architectures.
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and tumour volume definition based on the PET image data, for radiotherapy for lung cancer patients, and then to test these protocols with respect to levels of accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom image based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20 slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol.
Tumour volumes contoured on registered patient CT-PET images using the tested threshold values and viewing windows determined from the phantom study demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans. Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols which factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
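As a hedged illustration of the percentage-of-maximum threshold contouring mentioned above, a minimal Python sketch follows; the 40% fraction and the array-based interface are assumptions, since the study found the appropriate threshold to depend on the target-to-background ratio and respiratory motion.

import numpy as np

def threshold_contour(pet_volume, roi_mask, fraction=0.40):
    # Return a binary mask of voxels at or above `fraction` of the maximum
    # uptake found inside the region of interest.
    roi_values = pet_volume[roi_mask]
    threshold = fraction * roi_values.max()
    return (pet_volume >= threshold) & roi_mask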
Abstract:
Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ~10^4–10^5 particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich, GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (~10^2 images), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
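As an illustration of cross-correlation-based particle picking, the Python sketch below uses scikit-image template matching; the score cutoff, minimum spacing and function names are assumptions, and SwarmPS's own implementation and parameter handling may differ.

import numpy as np
from skimage.feature import match_template, peak_local_max

def pick_particles(micrograph, template, min_distance=40, score_cutoff=0.35):
    # Normalised cross-correlation of the template against the micrograph,
    # followed by non-maximum suppression to obtain candidate particle centres.
    scores = match_template(micrograph, template, pad_input=True)
    peaks = peak_local_max(scores, min_distance=min_distance,
                           threshold_abs=score_cutoff)
    return peaks, scores[tuple(peaks.T)]   # (row, col) coordinates and scores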
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the over-60-year-old age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year-old age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs. One of the artefacts observed is the step artefact, which is believed to occur from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora and segmenting them using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an alignment method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach in combination with the threshold selection method can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited a 0.23 mm average deviation in comparison to the 0.18 mm average deviation of CT-based models. The differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, yielding errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
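As a hedged illustration of the multilevel thresholding approach described above, the Python sketch below applies one threshold per anatomical region and keeps the largest connected component; the region boundaries, threshold values and clean-up step are assumptions, not the thesis' exact pipeline.

import numpy as np
from scipy import ndimage

def multilevel_threshold(volume, thresholds):
    # volume: 3D image ordered proximal-to-distal along axis 0;
    # thresholds: one intensity threshold per region (e.g. proximal,
    # diaphyseal and distal thirds of a femur).
    mask = np.zeros(volume.shape, dtype=bool)
    bounds = np.linspace(0, volume.shape[0], len(thresholds) + 1).astype(int)
    for t, (lo, hi) in zip(thresholds, zip(bounds[:-1], bounds[1:])):
        mask[lo:hi] = volume[lo:hi] > t
    # Keep the largest connected component as the bone mask.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)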
Abstract:
Introduction This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated at 5 centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity-modulated radiotherapy (IMRT) and 47 treated with volumetric-modulated arc therapy (VMAT). Methods Treatment plan quality was evaluated in terms of target dose homogeneity and organ-at-risk sparing, through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each organ-at-risk. Statistical significance was evaluated using two-tailed Welch's t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the organs-at-risk, with increased compliance with recommended organ-at-risk dose constraints compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions This study indicates that IMRT and VMAT have provided similar dosimetric quality, which is superior to the dosimetric quality achieved with 3DCRT.
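For reference, the Python sketch below shows common forms of the homogeneity and conformity indices named above; several variants of both metrics exist, so the study's exact definitions may differ.

import numpy as np

def homogeneity_index(target_doses):
    # ICRU 83 style HI = (D2% - D98%) / D50%, computed from the doses of the
    # voxels inside the planning target volume (D2% is the 98th percentile).
    d2, d50, d98 = np.percentile(target_doses, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(prescription_isodose_volume_cc, target_volume_cc):
    # RTOG style CI = volume enclosed by the prescription isodose / target volume.
    return prescription_isodose_volume_cc / target_volume_cc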
Abstract:
Background Supine imaging modalities provide valuable 3D information on scoliotic anatomy, but the altered spine geometry between the supine and standing positions affects the Cobb angle measurement. Previous studies report a mean 7°–10° Cobb angle increase from supine to standing, but none have reported the effect of endplate pre-selection or whether other parameters affect this Cobb angle difference. Methods Cobb angles from existing coronal radiographs were compared to those on existing low-dose CT scans taken within three months of the reference radiograph for a group of females with adolescent idiopathic scoliosis. Reformatted coronal CT images were used to measure supine Cobb angles with and without endplate pre-selection (endplates selected from the radiographs) by two observers on three separate occasions. Inter- and intra-observer measurement variability were assessed. Multi-linear regression was used to investigate whether there was a relationship between the supine to standing Cobb angle change and eight variables: patient age, mass, standing Cobb angle, Risser sign, ligament laxity, Lenke type, fulcrum flexibility and the time delay between the radiograph and CT scan. Results Fifty-two patients with right thoracic Lenke Type 1 curves and a mean age of 14.6 years (SD 1.8) were included. The mean Cobb angle on standing radiographs was 51.9° (SD 6.7). The mean Cobb angle on supine CT images without pre-selection of endplates was 41.1° (SD 6.4). The mean Cobb angle on supine CT images with endplate pre-selection was 40.5° (SD 6.6). Pre-selecting vertebral endplates increased the mean Cobb change by 0.6° (SD 2.3, range −9° to 6°). When free to do so, observers chose different levels for the end vertebrae in 39% of cases. Multi-linear regression revealed a statistically significant relationship between the supine to standing Cobb change and fulcrum flexibility (p = 0.001), age (p = 0.027) and standing Cobb angle (p < 0.001). The 95% confidence intervals for intra-observer and inter-observer measurement variability were 3.1° and 3.6°, respectively. Conclusions Pre-selecting vertebral endplates causes minor changes to the mean supine to standing Cobb change. There is a statistically significant relationship between the supine to standing Cobb change and fulcrum flexibility, such that this difference can be considered a potential alternative measure of spinal flexibility.
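As an illustration of the Cobb angle measurement underlying these comparisons, the minimal Python sketch below computes the angle between two endplate lines from coronal-plane landmarks; the two-points-per-endplate interface is an assumption for the sketch, not the observers' measurement software.

import math

def cobb_angle(upper_endplate, lower_endplate):
    # Each endplate is given as two (x, y) landmark points on a coronal image:
    # the superior endplate of the upper end vertebra and the inferior
    # endplate of the lower end vertebra.
    (x1, y1), (x2, y2) = upper_endplate
    (x3, y3), (x4, y4) = lower_endplate
    a1 = math.atan2(y2 - y1, x2 - x1)
    a2 = math.atan2(y4 - y3, x4 - x3)
    angle = abs(math.degrees(a1 - a2)) % 180.0
    return min(angle, 180.0 - angle)       # Cobb angle in degrees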
Abstract:
We study the influence of the choice of template in tensor-based morphometry. Using 3D brain MR images from 10 monozygotic twin pairs, we defined a tensor-based distance in the log-Euclidean framework [1] between each image pair in the study. Relative to this metric, twin pairs were found to be closer to each other on average than random pairings, consistent with evidence that brain structure is under strong genetic control. We also computed the intraclass correlation and associated permutation p-value at each voxel for the determinant of the Jacobian matrix of the transformation. The cumulative distribution function (CDF) of the voxel-wise p-values was computed for each of the templates and compared to the null distribution. Surprisingly, there was very little difference between the CDFs of statistics computed from analyses using different templates. As the brain with the least log-Euclidean deformation cost, the mean template defined here avoids the blurring caused by creating a synthetic image from a population and, when selected from a large population, avoids bias by being geometrically centered, in a metric that is sensitive enough to anatomical similarity that it can even detect genetic affinity among anatomies.
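As a hedged illustration of the log-Euclidean tensor distance referred to above, the Python sketch below computes d(S1, S2) = ||logm(S1) - logm(S2)||_F for symmetric positive-definite tensors; aggregating this quantity over the whole image as a template-to-image deformation cost is omitted here.

import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(S1, S2):
    # S1, S2: symmetric positive-definite matrices (e.g. deformation tensors
    # J^T J built from the Jacobian J of a nonlinear registration).
    return np.linalg.norm(logm(S1) - logm(S2), ord='fro')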
Abstract:
Black point in wheat has the potential to cost the Australian industry $A30.4 million a year. It is difficult and expensive to screen for resistance, so the aim of this study was to validate 3 previously identified quantitative trait loci (QTLs) for black point resistance on chromosomes 2B, 4A, and 3D of the wheat variety Sunco. Black point resistance data and simple sequence repeat (SSR) markers, linked to the resistance QTLs and suited to high-throughput assay, were analysed in the doubled haploid population, Batavia (susceptible) × Pelsart (resistant). Sunco and Pelsart both have Cook in their pedigree and both have the Triticum timopheevii translocation on 2B. SSR markers identified for the 3 genetic regions were gwm319 (2B, T. timopheevii translocation), wmc048 (4AS), and gwm341 (3DS). Gwm319 and wmc048 were associated with black point resistance in the validation population. Gwm341 may have an epistatic influence on the trait because when resistance alleles were present at both gwm319 and wmc048, the Batavia-derived allele at gwm341 was associated with a higher proportion of resistant lines. Data are presented showing the level of enrichment achieved for black point resistance, using 1, 2, or 3 of these molecular markers, and the number of associated discarded resistant lines. The level of population enrichment was found to be 1.83-fold with 6 of 17 resistant lines discarded when gwm319 and wmc048 were both used for selection. Interactions among the 3 QTLs appear complex and other genetic and epigenetic factors influence susceptibility to black point. Polymorphism was assessed for these markers within potential breeding material. This indicated that alternative markers to wmc048 may be required for some parental combinations. Based on these results, marker-assisted selection for the major black point resistance QTLs can increase the rate of genetic gain by improving the selection efficiency and may facilitate stacking of black point resistances from different sources.
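The enrichment arithmetic can be illustrated with a short Python sketch; the counts below are hypothetical, chosen only so that the example reproduces the reported 1.83-fold figure and the 11 (= 17 - 6) resistant lines retained after selection.

def enrichment_fold(resistant_selected, total_selected,
                    resistant_population, total_population):
    # Fold enrichment = proportion of resistant lines after marker selection
    # divided by the proportion in the unselected population.
    return (resistant_selected / total_selected) / \
           (resistant_population / total_population)

# Hypothetical counts: 11 of 30 selected lines resistant versus 17 resistant
# lines in a full population assumed to contain 85 lines.
print(round(enrichment_fold(11, 30, 17, 85), 2))  # 1.83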
Abstract:
Relay selection for cooperative communications promises significant performance improvements, and is, therefore, attracting considerable attention. While several criteria have been proposed for selecting one or more relays, distributed mechanisms that perform the selection have received relatively less attention. In this paper, we develop a novel, yet simple, asymptotic analysis of a splitting-based multiple access selection algorithm to find the single best relay. The analysis leads to simpler and alternate expressions for the average number of slots required to find the best user. By introducing a new 'contention load' parameter, the analysis shows that the parameter settings used in the existing literature can be improved upon. New and simple bounds are also derived. Furthermore, we propose a new algorithm that addresses the general problem of selecting the best Q ≥ 1 relays, and analyze and optimize it. Even for a large number of relays, the scalable algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. We also propose a new and simple scheme for the practically relevant case of discrete metrics. Altogether, our results develop a unifying perspective on the general problem of distributed selection in cooperative systems and several other multi-node systems.
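As a hedged illustration of splitting-based contention, the Python sketch below simulates a generic variant with idealised ternary feedback (idle / single / collision); it is not the paper's optimised algorithm or its parameter settings.

def splitting_select(metrics, p0=0.5, max_slots=50):
    # metrics: per-relay channel metrics mapped to the interval (0, 1);
    # relays whose metric lies in (lo, hi] transmit in the current slot.
    lo, hi, fallback_lo = 1.0 - p0, 1.0, 0.0
    for slot in range(1, max_slots + 1):
        active = [i for i, m in enumerate(metrics) if lo < m <= hi]
        if len(active) == 1:                   # single reply: best relay found
            return active[0], slot
        if len(active) == 0:                   # idle: move the window down
            hi, lo = lo, max(fallback_lo, lo - (hi - lo))
        else:                                  # collision: split the window
            fallback_lo, lo = lo, (lo + hi) / 2.0
    return None, max_slots                     # unresolved within max_slots

# Example: the relay with the largest metric (index 0) wins the contention.
print(splitting_select([0.91, 0.72, 0.35, 0.58]))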