978 results for Radiotherapy, Image-Guided
Abstract:
The reporting and auditing of patient dose is an important component of radiotherapy quality assurance. The manual extraction of dose-volume metrics is time-consuming and undesirable when auditing the dosimetric quality of a large cohort of patient plans. A dose assessment application was written to overcome this, allowing the calculation of various dose-volume metrics for large numbers of plans exported from treatment planning systems. This application expanded on the DICOM-handling functionality of the MCDTK software suite. The software extracts dose values in the volume of interest using a ray-casting point-in-polygon algorithm, where the polygons have been defined by the contours in the RTSTRUCT file...
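As an illustration of the extraction step, the following is a minimal Python sketch of a ray-casting point-in-polygon test of the kind described above (the function, the contour and the example points are illustrative and are not taken from the MCDTK code):

```python
def point_in_polygon(x, y, vertices):
    """Ray-casting test: cast a horizontal ray from (x, y) and count how many
    polygon edges it crosses; an odd count means the point is inside."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example: test whether dose-grid points fall inside one contour slice
contour = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
print(point_in_polygon(5.0, 5.0, contour))   # True
print(point_in_polygon(12.0, 5.0, contour))  # False
```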
Abstract:
Gel dosimetry and plastic chemical dosimeters such as Presage™ are capable of very accurately mapping dose distributions in three dimensions. Combined with their near-tissue equivalence, one would expect that after several decades of development they would be the dosimeter of choice for 3D dosimetry; however, they have not achieved widespread clinical use. This presentation will include a brief description and history of developments in gels and 3D plastics for dosimetry, their limitations and advantages, and their role in the future.
Abstract:
The established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form that can easily be read and used by the dose calculation code DOSXYZnrc...
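The abstract does not give CTCombine's actual calibration, but converting CT numbers to mass densities is typically done with a piecewise-linear ramp of the kind sketched below (the calibration points and names are assumptions for illustration only):

```python
import numpy as np

# Illustrative CT-number-to-mass-density calibration points (HU, g/cm^3).
# These are typical air/lung/water/bone values for a sketch only; the actual
# ramp used by CTCombine is not given in the abstract.
CALIBRATION = np.array([
    [-1000.0, 0.001],   # air
    [ -500.0, 0.50 ],   # lung
    [    0.0, 1.00 ],   # water / soft tissue
    [ 1000.0, 1.60 ],   # bone-like
    [ 3000.0, 2.80 ],   # upper bound for dense material
])

def hu_to_density(hu_volume):
    """Convert a CT-number volume (HU) to mass densities by piecewise-linear
    interpolation between the calibration points above."""
    hu = np.asarray(hu_volume, dtype=float)
    return np.interp(hu, CALIBRATION[:, 0], CALIBRATION[:, 1])

# Example: a tiny 2x2 'CT slice'
print(hu_to_density([[-1000.0, -250.0], [0.0, 500.0]]))
```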
Abstract:
Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking manifold geometry into account is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, the manifold shape is only approximated, which can cause a loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
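A minimal sketch of the kernelised random-projection idea, assuming a log-Euclidean Gaussian kernel on SPD matrices and hyperplanes formed as random combinations of a set of anchor matrices (the kernel choice, the anchors and all names are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def spd_logm(X):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(X, Y, sigma=1.0):
    """Gaussian kernel on SPD matrices using the log-Euclidean distance
    ||logm(X) - logm(Y)||_F (one common SPD kernel choice, assumed here)."""
    d = np.linalg.norm(spd_logm(X) - spd_logm(Y))
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def random_projection_features(spd_matrices, anchors, num_hyperplanes=64, sigma=1.0, seed=0):
    """Represent each SPD matrix as a vector of projection coefficients onto random
    hyperplanes in the RKHS: each hyperplane is a random combination of the anchors,
    so its inner product with phi(X) reduces to a weighted sum of kernel values."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((num_hyperplanes, len(anchors)))       # hyperplane weights
    K = np.array([[log_euclidean_kernel(X, Z, sigma) for Z in anchors]
                  for X in spd_matrices])                           # kernel evaluations
    return K @ A.T                                                  # (n_matrices, num_hyperplanes)

# Example: random SPD matrices as stand-ins for region covariance descriptors
rng = np.random.default_rng(1)
def rand_spd(d=5):
    M = rng.standard_normal((d, d))
    return M @ M.T + 1e-3 * np.eye(d)

anchors = [rand_spd() for _ in range(10)]
query = [rand_spd() for _ in range(3)]
print(random_projection_features(query, anchors).shape)  # (3, 64)
```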
Abstract:
Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared to a non-frontal face). To address the above problem, we propose a novel approach that enhances nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to have resemblance to the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant Analysis.
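For context, a minimal sketch of the affine-hull variant of nearest-points matching between two image sets (the AHISD-style single-model baseline the abstract compares against); the proposed method instead builds multiple local convex hulls via maximum margin clustering, which is not shown here:

```python
import numpy as np

def affine_hull_distance(X, Y, energy=0.98):
    """Nearest-points distance between the affine hulls of two image sets.
    X, Y: (d, n_samples) arrays whose columns are vectorised images.
    The hull of each set is modelled as mean + span of leading principal directions."""
    def hull_basis(Z):
        mu = Z.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(Z - mu, full_matrices=False)
        k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
        return mu, U[:, :k]
    mu_x, Ux = hull_basis(X)
    mu_y, Uy = hull_basis(Y)
    # Nearest points: least-squares solve for the offsets within each hull
    A = np.hstack([Ux, -Uy])
    b = (mu_y - mu_x).ravel()
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(b - A @ t)

# Example: two random 'image sets' of 20 samples in a 100-dim feature space
rng = np.random.default_rng(0)
set_a = rng.standard_normal((100, 20))
set_b = rng.standard_normal((100, 20)) + 0.5
print(affine_hull_distance(set_a, set_b))
```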
Abstract:
Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to different undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius-norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on the Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
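A minimal sketch of one standard Frobenius-norm distance between subspaces on a Grassmann manifold, the projection-matrix metric; the paper's reconstruction-error formulation and its sparse-representation machinery may differ in detail, so treat this only as an illustration:

```python
import numpy as np

def local_subspace(samples, dim=5):
    """Orthonormal basis of a local linear subspace spanned by a set of
    vectorised images (columns of `samples`); a point on a Grassmann manifold."""
    U, _, _ = np.linalg.svd(samples, full_matrices=False)
    return U[:, :dim]

def projection_frobenius_distance(U1, U2):
    """Frobenius-norm distance between subspaces via their projection matrices,
    ||U1 U1^T - U2 U2^T||_F (a standard Grassmann metric, assumed here)."""
    return np.linalg.norm(U1 @ U1.T - U2 @ U2.T)

# Example: distance between subspaces of a gallery set and a probe set
rng = np.random.default_rng(0)
gallery = rng.standard_normal((100, 30))   # 30 vectorised images, 100-dim features
probe = rng.standard_normal((100, 30))
print(projection_frobenius_distance(local_subspace(gallery), local_subspace(probe)))
```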
Abstract:
This thesis investigates the fusion of 3D visual information with 2D image cues to provide 3D semantic maps of the large-scale environments that a robot traverses, for use in robotic applications. A major theme of this thesis is to exploit the availability of 3D information acquired from robot sensors to improve upon 2D object classification alone. The proposed methods have been evaluated on several indoor and outdoor datasets collected from mobile robotic platforms, including a quadcopter and a ground vehicle, covering several kilometres of urban roads.
Abstract:
In the avian model of myopia, retinal image degradation quickly leads to ocular enlargement. We now give evidence that regionally specific changes in ocular size are correlated both with biomechanical indices of scleral remodeling (e.g. hydration capacity) and with biochemical changes in proteinase activities. The latter include a 72 kDa matrix metalloproteinase (putatively MMP-2), other gelatin-binding MMPs, an acid-pH MMP and a serine protease. Specifically, we have found that increases in scleral hydration capacity parallel increases in collagen-degrading activities. Gelatin zymography reveals that eyes with 7 days of retinal image degradation have elevated levels (1.4-fold) of gelatinolytic activities at 72 and 67 kDa M(r) in the equatorial and posterior pole regions of the sclera, while after 14 days of treatment the increases are no longer apparent. Lower M(r) zymographic activities at 50, 46 and 37 kDa are collectively increased in eyes treated for both 7 and 14 days (1.4- and 2.4-fold respectively) in the equator and posterior pole areas of enlarging eyes. Western blot analyses of scleral extracts with an antibody to human MMP-2 reveal immunoreactive bands at 65, 30 and 25 kDa. Zymograms incubated under slightly acidic conditions reveal that, in enlarging eyes, MMP activities at 25 and 28 kDa M(r) are increased in the scleral equator and posterior pole (1.6- and 4.5-fold respectively). A TIMP-like protein is also identified in sclera and cornea by Western blot analysis. Finally, retinal-image degradation also increases (~2.6-fold) the activity of a 23.5 kDa serine proteinase in the limbus, equator and posterior pole sclera that is inhibited by aprotinin and soybean trypsin inhibitor. Taken together, these results indicate that eye growth induced by retinal-image degradation involves increases in the activities of multiple scleral proteinases that could modify the biomechanical properties of scleral structural components and contribute to tissue remodeling and growth.
Abstract:
In this paper we describe the approaches adopted to generate the runs submitted to ImageCLEFPhoto 2009, with the aim of promoting document diversity in the rankings. Four of our runs are text-based approaches that employ textual statistics extracted from the captions of images: MMR [1], a state-of-the-art method for result diversification; two approaches that combine relevance information and clustering techniques; and an instantiation of the Quantum Probability Ranking Principle. The fifth run exploits visual features of the provided images to re-rank the initial results by means of Factor Analysis. The results reveal that our methods based on text captions alone consistently improve the performance of the respective baselines, while the approach that combines visual features with textual statistics shows lower levels of improvement.
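For reference, a minimal sketch of standard MMR re-ranking of the kind the text-based runs build on (the scores, similarity values and trade-off parameter below are illustrative):

```python
import numpy as np

def mmr_rerank(relevance, similarity, k=10, lam=0.7):
    """Maximal Marginal Relevance: greedily pick documents that balance relevance
    to the query against redundancy with already-selected documents.
    relevance:  (n,) relevance score of each candidate document
    similarity: (n, n) pairwise document similarity matrix
    Returns the indices of the k selected documents, in rank order."""
    relevance = np.asarray(relevance, dtype=float)
    candidates = list(range(len(relevance)))
    selected = []
    while candidates and len(selected) < k:
        if not selected:
            best = max(candidates, key=lambda i: relevance[i])
        else:
            def mmr_score(i):
                redundancy = max(similarity[i][j] for j in selected)
                return lam * relevance[i] - (1.0 - lam) * redundancy
            best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy example: documents 1 and 2 are near-duplicates, so only one is kept early
rel = [0.9, 0.85, 0.84, 0.6]
sim = np.array([[1.0, 0.2, 0.2, 0.1],
                [0.2, 1.0, 0.95, 0.1],
                [0.2, 0.95, 1.0, 0.1],
                [0.1, 0.1, 0.1, 1.0]])
print(mmr_rerank(rel, sim, k=3))  # [0, 1, 3]: the duplicate (2) is pushed down
```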
Abstract:
This paper introduces a minimalistic approach to producing a visual hybrid map of a mobile robot’s working environment. The proposed system uses omnidirectional images along with odometry information to build an initial dense pose-graph map. A two-level hybrid map is then extracted from the dense graph. The hybrid map consists of global and local levels. The global level contains a sparse topological map extracted from the initial graph using a dual clustering approach. The local level contains a spherical view stored at each node of the global level. The spherical views provide both an appearance signature for the nodes, which the robot uses to localize itself in the environment, and heading information when the robot uses the map for visual navigation. To show the usefulness of the map, an experiment was conducted in which the map was used for multiple visual navigation tasks inside an office workplace.
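As a rough illustration of the global/local split, the sketch below keeps a sparse subset of pose-graph nodes, each retaining its spherical view; the paper extracts the global level with a dual clustering approach, whereas this sketch uses a simple distance threshold, so the names and logic are an assumed simplification:

```python
import numpy as np

class GlobalNode:
    """A node of the sparse global level: an odometry pose plus the locally
    stored spherical view used as an appearance signature for localisation."""
    def __init__(self, pose, spherical_view):
        self.pose = pose                      # (x, y, heading) from odometry
        self.spherical_view = spherical_view  # appearance / heading reference

def extract_global_level(poses, views, min_spacing=2.0):
    """Keep a sparse set of nodes so that consecutive global nodes are at least
    `min_spacing` metres apart, each retaining its spherical view."""
    global_nodes = []
    last_xy = None
    for pose, view in zip(poses, views):
        xy = np.array(pose[:2])
        if last_xy is None or np.linalg.norm(xy - last_xy) >= min_spacing:
            global_nodes.append(GlobalNode(pose, view))
            last_xy = xy
    return global_nodes

# Example: a straight 10 m trajectory sampled every 0.5 m
poses = [(0.5 * i, 0.0, 0.0) for i in range(21)]
views = [f"view_{i}" for i in range(21)]
print(len(extract_global_level(poses, views)))  # 6 sparse global nodes
```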
Abstract:
This review focuses on one of the fundamental phenomena that occur upon application of sufficiently strong electric fields to gases, namely the formation and propagation of ionization waves (streamers). The dynamics of streamers is controlled by a strongly nonlinear coupling, in localized streamer tip regions, between the electric field enhanced by charge separation and the ionization and transport of charged species in that enhanced field. Streamers appear in nature (as the initial stages of sparks and lightning, and as sprites, the huge structures above thunderclouds), and are also found in numerous technological applications of electrical discharges. Here we discuss the fundamental physics of guided streamer-like structures, plasma bullets, which are produced in cold atmospheric-pressure plasma jets. Plasma bullets are guided ionization waves moving in a thin column of a jet of plasma-forming gases (e.g., He or Ar) expanding into ambient air. In contrast to streamers in free (unbounded) space, which propagate in a stochastic manner and often branch, guided ionization waves are repetitive and highly reproducible and propagate along the same path, the jet axis. This property of guided streamers, in comparison with streamers in free space, enables many advanced time-resolved experimental studies of ionization waves with nanosecond precision. In particular, experimental studies on the manipulation of streamers by external electric fields and on streamer interactions are critically examined. This review also introduces the basic theories and recent advances in experimental and computational studies of guided streamers, in particular those related to the propagation dynamics of ionization waves and the various parameters of relevance to plasma streamers. This knowledge is very useful for optimizing the efficacy of applications of plasma streamer discharges in fields ranging from health care and medicine to materials science and nanotechnology.
Abstract:
State and local governments frequently look to flagship cultural projects to improve the city image and catalyze tourism but, in the process, often overlook their potential to foster local arts development. To better understand this role, the article examines whether and how cultural institutions in Los Angeles and San Francisco attract and support arts-related activity. The analysis reveals that cultural flagships have mixed success in generating arts-based development and that their ability to do so may be improved through attention to the local context, facility and institutional characteristics, and the approach of the sponsoring agencies. Such knowledge is useful for planners seeking to enhance their revitalization efforts, particularly as the economic development potential of arts organizations and artists has become more apparent.
Abstract:
Dealing with digital medical images raises many new security problems, with legal and ethical complexities for local archiving and distant medical services. These include image retention and fraud, distrust, and invasion of privacy. This project was a significant step forward in developing a complete framework for systematically designing, analyzing, and applying digital watermarking, with a particular focus on medical image security. A formal generic watermarking model, three new attack models, and an efficient watermarking technique for medical images were developed. These outcomes contribute to standardizing future research in formal modeling and in the complete security and computational analysis of watermarking schemes.
Abstract:
While formal definitions and security proofs are well established in some fields, such as cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance is exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on the adversary's capabilities. It is envisaged that, with proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
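To make the idea of a generic keyed model concrete, the sketch below shows the kind of embed/detect interface such a model abstracts over; the additive spread-spectrum-style embedding and all parameter choices are illustrative assumptions, not the model proposed in the paper:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WatermarkingScheme:
    """A toy keyed watermarking scheme: embed(cover, key) -> marked image,
    detect(image, key) -> presence decision. Illustrative only."""
    strength: float = 8.0

    def _pattern(self, key: int, shape) -> np.ndarray:
        # Key-dependent pseudo-random +/-1 pattern (the secret key seeds the generator)
        rng = np.random.default_rng(key)
        return rng.choice([-1.0, 1.0], size=shape)

    def embed(self, image: np.ndarray, key: int) -> np.ndarray:
        """Additive embedding of the key-dependent pattern into the cover image."""
        return image + self.strength * self._pattern(key, image.shape)

    def detect(self, image: np.ndarray, key: int, threshold: float = 0.5) -> bool:
        """Blind detection by correlating the (mean-removed) image with the pattern."""
        pattern = self._pattern(key, image.shape)
        score = float(np.mean((image - image.mean()) * pattern)) / self.strength
        return score > threshold

# Example: the mark is detected with the correct key, not with a wrong one
scheme = WatermarkingScheme()
cover = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
marked = scheme.embed(cover, key=42)
print(scheme.detect(marked, key=42), scheme.detect(marked, key=7))  # typically: True False
```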