86 results for Radiographic Image Interpretation, Computer-Assisted
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks, including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. Compared to traditional narrow-field-of-view perspective cameras, wide-angle cameras yield more accurate estimates of camera egomotion. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the growing popularity of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions in a scene from very different viewpoints, which makes them well suited to visual place recognition. However, the ability to estimate camera egomotion and recognise the same scene in two different images depends on the ability to reliably detect and describe the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images, and applying them directly to wide-angle images is problematic because no account is taken of the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central-projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each variant must find the scale-space representation of an image on the sphere, and they differ in the approaches they use to do this. Extensive experiments on real and synthetically generated wide-angle images validate the two new keypoint detectors and the method of keypoint description. The better of the two keypoint detectors is applied to vision-based localisation tasks, including visual odometry and visual place recognition, using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed that attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
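To make the sphere-mapping step described above concrete, the following is a minimal sketch of how a pixel in a central-projection wide-angle image can be mapped to the unit sphere, assuming an equidistant fisheye model. The function name and the calibration parameters (principal point cx, cy and focal length f) are illustrative assumptions, not the camera model or calibration used in the thesis.

```python
import numpy as np

def fisheye_to_sphere(u, v, cx, cy, f):
    """Map a pixel (u, v) from an equidistant fisheye image to a point on
    the unit sphere. The camera is assumed central, with principal point
    (cx, cy) and focal length f in pixels; under the equidistant model the
    angle from the optical axis is theta = r / f."""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)          # radial distance from the principal point
    theta = r / f                 # angle of incidence (equidistant model)
    phi = np.arctan2(dv, du)      # azimuth around the optical axis
    # Unit-sphere coordinates, with z along the optical axis.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Example with hypothetical parameters: a pixel near the corner of a
# 1024x1024 fisheye image maps to a point far from the sphere's pole.
print(fisheye_to_sphere(900.0, 900.0, cx=512.0, cy=512.0, f=320.0))
```

Once an image is resampled onto the sphere in this way, a spherical scale-space representation can be computed and SIFT-style detection applied without the planar distortion bias.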
Abstract:
Camera calibration information is required if multiple-camera networks are to deliver more than the sum of many single-camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second involves finding methods to extract scene information from the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale-space primal sketch of local image features, together with a scale selection method that makes use of the primal sketch. The primal-sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods use the second moment matrix to estimate the local affine shape of image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is also presented. Given only a few initial putative correspondences, it extracts large numbers of additional accurate correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views, succeeding in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
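As an illustration of the Hessian-based shape estimation idea described above, the sketch below derives a local elliptical shape from the 2x2 Hessian of image intensity at a point, using Gaussian derivative filters. This is a generic sketch of the underlying principle, not the thesis's algorithm; the function name, the derivative scale and the synthetic test image are assumptions.

```python
import numpy as np
from scipy import ndimage

def hessian_shape(image, row, col, sigma=2.0):
    """Estimate a local elliptical shape from the Hessian at (row, col).

    Second derivatives are taken at scale sigma with Gaussian derivative
    filters; the eigendecomposition of the 2x2 Hessian gives the
    orientation and relative extent of a blob-like structure."""
    # Gaussian second derivatives of the whole image (simple, not optimised).
    Lxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2 (cols)
    Lyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2 (rows)
    Lxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))  # mixed term
    H = np.array([[Lxx[row, col], Lxy[row, col]],
                  [Lxy[row, col], Lyy[row, col]]])
    eigvals, eigvecs = np.linalg.eigh(H)
    # Ellipse axes lie along eigvecs; axis lengths scale as 1/sqrt(|eigvals|).
    return eigvals, eigvecs

# Example on a synthetic elongated bright blob (hypothetical data): both
# eigenvalues are negative, with different magnitudes along the two axes.
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-(((xx - 32) / 8.0) ** 2 + ((yy - 32) / 4.0) ** 2))
print(hessian_shape(blob, 32, 32)[0])
```

Note that only three Gaussian derivative responses are needed here, whereas the second moment matrix additionally requires smoothing products of first derivatives over a window, which is one source of the computational saving the abstract mentions.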
Abstract:
The Guardian's reportage of the 2009 United Kingdom Member of Parliament (MP) expenses scandal used crowdsourcing and computational journalism techniques. Computational journalism can be broadly defined as the application of computer science techniques to the activities of journalism. Its foundation lies in computer-assisted reporting techniques, and its importance is increasing due to: (a) the increasing availability of large-scale government datasets for scrutiny; (b) the declining cost, increasing power and ease of use of data mining and filtering software, and Web 2.0; and (c) the explosion of online public engagement and opinion. This paper provides a case study of the Guardian's MP expenses scandal reportage and reveals some key challenges and opportunities for digital journalism. It finds, first, that journalists may increasingly take an active role in understanding, interpreting, verifying and reporting clues or conclusions that arise from the interrogation of datasets (computational journalism). Secondly, a distinction should be made between information reportage and computational journalism in the digital realm, just as a distinction might be made between citizen reporting and citizen journalism. Thirdly, an opportunity exists for online news providers to take a ‘curatorial’ role, selecting and making easily available the best data sources for readers to use (information reportage). These activities have always been fundamental to journalism; however, the way in which they are undertaken may change. Findings from this paper suggest opportunities and challenges for the implementation of computational journalism techniques in practice by digital Australian media providers, as well as further areas of research.
Abstract:
This book presents important research advances in the study of teaching and teacher research, as well as reviews of motivation in education, mentoring, the evaluation of online learning, educational change and computer-assisted teaching.
Abstract:
There are at least four key challenges in the online news environment that computational journalism may address. First, news providers operate in a rapidly evolving environment, and larger businesses are typically slower to adapt to market innovations. Second, news consumption patterns have changed, and news providers need to find new ways to capture and retain digital users. Third, declining financial performance has led to cost cuts at mass-market newspapers. Finally, investigative reporting is typically slow, high cost and often tedious, yet it is valuable to the reputation of a news provider. Computational journalism involves the application of software and technologies to the activities of journalism, drawing on the fields of computer science, social science and communications. New technologies may enhance the traditional aims of journalism, or may require “a new breed of people who are midway between technologists and journalists” (Irfan Essa in Mecklin 2009: 3). Historically referred to as ‘computer-assisted reporting’, the use of software in online reportage is increasingly valuable due to three factors: larger datasets are becoming publicly available; software is becoming more sophisticated and ubiquitous; and the Australian digital economy is developing. This paper introduces the key elements of computational journalism: why it is needed, what it involves, its benefits and challenges, a case study and examples. When correctly used, computational techniques can quickly provide a solid factual basis for original investigative journalism and may increase interaction with readers. Computational journalism is thus a major opportunity to enhance the delivery of original investigative journalism, which ultimately may attract and retain readers online.
Abstract:
This paper describes the results of a study evaluating the content, functionality and design features of an innovative website called the Doorway to Research (http://rsc.acid.net.au/Main.aspx), which was developed to support international graduate students studying at universities in Australia. First, the key features of the website are described. Second, the results of a pilot study are explored, in which 12 students and faculty members tested key aspects of the design, content and functionality of the website and provided written and oral feedback based on task-based questions and focus group discussions. Finally, recommendations for future development are presented. The results indicate general student satisfaction with the website and its design, content and functionality, with specific areas identified for further development.
Abstract:
The antiretroviral therapy (ART) program for People Living with HIV/AIDS (PLHIV) in Vietnam has been scaled up rapidly in recent years (from 50 clients in 2003 to almost 38,000 in 2009). ART success is highly dependent on the ability of patients to fully adhere to the prescribed treatment regimen. Despite the remarkable extension of ART programs in Vietnam, HIV/AIDS program managers still have little reliable data on levels of ART adherence and the factors that might promote or reduce adherence. Several previous studies in Vietnam estimated extremely high levels of ART adherence among their samples, although there are reasons to question the veracity of the conclusion that adherence is nearly perfect. Further, no study has quantitatively assessed the factors influencing ART adherence. To reduce these gaps, this study was designed in several phases and used a multi-method approach to examine levels of ART non-adherence and its relationship to a range of demographic, clinical, social and psychological factors. The study began with an exploratory qualitative phase employing four focus group discussions and 30 in-depth interviews with PLHIV, peer educators, carers and health care providers (HCPs). Survey interviews were then completed with 615 PLHIV in five rural and urban out-patient clinics in northern Vietnam using an Audio Computer-Assisted Self-Interview (ACASI) and clinical records extraction. The survey instrument was carefully developed through a systematic procedure to ensure its reliability and validity, and cultural appropriateness was considered in the design and implementation of both the qualitative study and the cross-sectional survey. The qualitative study uncovered contrary perceptions between health care providers and HIV/AIDS patients regarding the true levels of ART adherence: health care providers often stated that most of their patients closely adhered to their regimens, while PLHIV and their peers reported that “it is not easy” to do so. The quantitative survey findings supported the point of view of the PLHIV and their peers, because non-adherence to ART was relatively common among the study sample. Using the ACASI technique, the estimated prevalence of one-month non-adherence measured by the Visual Analogue Scale (VAS) was 24.9%, and the prevalence of four-day not-on-time adherence using the modified Adult AIDS Clinical Trials Group (AACTG) instrument was 29%. Observed agreement between the two measures was 84%, and the kappa coefficient was 0.60 (SE = 0.04, p < 0.0001). The good agreement between the two measures is consistent with previous research and provides evidence of cross-validation of the estimated adherence levels. The qualitative study was also valuable in suggesting important variables for the survey's conceptual framework and instrument development. The survey confirmed significant correlations between the two measures of ART adherence (i.e. dose adherence and time adherence) and many of the factors identified in the qualitative study, but failed to find evidence of significant correlations between some other factors and ART adherence. Non-adherence to ART was significantly associated with untreated depression, heavy alcohol use, illicit drug use, experiences of medication side-effects, chance health locus of control, low quality of information from HCPs, low satisfaction with received support and poor social connectedness.
No multivariate association was observed between ART adherence and age, gender, education, duration of ART, the use of adherence aids, disclosure of ART, patients' ability to initiate communication with HCPs, or the distance between the clinic and patients' residence. This is the largest study yet reported in Asia to examine non-adherence to ART and its possible determinants. The evidence strongly supports recent calls from other developing nations for HIV/AIDS services to provide screening, counseling and treatment for patients with depressive symptoms, heavy alcohol use and substance use. Counseling should also address fatalistic beliefs about chance or luck determining health outcomes. The data suggest that adherence could be enhanced by regularly providing information on ART and by assisting patients to maintain social connectedness with their family and the community. This study highlights the benefits of using a multi-method approach to examine complex barriers to, and facilitators of, medication adherence. It also demonstrates the utility of the ACASI interview method in enhancing open disclosure by people living with HIV/AIDS and thus increasing the veracity of self-reported data.
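For readers unfamiliar with the agreement statistics quoted above, the following minimal sketch shows how Cohen's kappa is computed from a square agreement table. The 2x2 counts are hypothetical values chosen only to reproduce roughly the reported figures (n = 615, observed agreement of about 84%, kappa of about 0.60); they are not the study's actual data.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square agreement table of counts.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (diagonal proportion) and p_e the agreement expected by chance from
    the marginal totals."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                                  # observed agreement
    p_e = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table (adherent / non-adherent by VAS in rows, by the
# modified AACTG in columns), chosen so that n = 615, observed agreement
# is ~84% and kappa is ~0.60, matching the reported values.
table = [[400, 50],
         [48, 117]]
print(cohens_kappa(table))  # ~0.60
```

Kappa corrects raw agreement for the agreement expected by chance, which is why an 84% observed agreement corresponds to the more modest, but still good, kappa of 0.60.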
Abstract:
This study describes the design of a biphasic scaffold composed of a Fused Deposition Modeling scaffold (bone compartment) and an electrospun membrane (periodontal compartment) for periodontal regeneration. To achieve simultaneous alveolar bone and periodontal ligament regeneration, a cell-based strategy was carried out by combining osteoblast culture in the bone compartment with the placement of multiple periodontal ligament (PDL) cell sheets on the electrospun membrane. In vitro data showed that the osteoblasts formed mineralized matrix in the bone compartment after 21 days in culture and that PDL cell sheet harvesting did not induce significant cell death. The cell-seeded biphasic scaffolds were placed onto a dentin block and implanted for 8 weeks in an athymic rat subcutaneous model, then analyzed by μCT, immunohistochemistry and histology. In the bone compartment, more intense ALP staining was obtained following seeding with osteoblasts, confirming the μCT results, which showed higher mineralization density for these scaffolds. A thin mineralized cementum-like tissue was deposited on the dentin surface for the scaffolds incorporating the multiple PDL cell sheets, as observed by H&E and Azan staining. These scaffolds also demonstrated better attachment onto the dentin surface, compared to no attachment when no cell sheets were used. In addition, immunohistochemistry revealed the presence of CEMP1 protein at the interface with the dentin. These results demonstrate that the combination of multiple PDL cell sheets and a biphasic scaffold allows the simultaneous delivery of the cells necessary for in vivo regeneration of alveolar bone, periodontal ligament and cementum.