871 results for Content-Based Image Retrieval
Abstract:
Information skills instruction for research candidates has recently been formalised as coursework at the Queensland University of Technology. Feedback solicited from participants suggests that students benefit from such coursework in a number of ways. Their perception of the value of specific content areas to their literature review and thesis presentation is favourable. A small group of students who participated in interviews identified five ways in which the coursework assisted the research process. As instructors continue to work with the postgraduate community it would be useful to deepen our understanding of how such instruction is perceived and the benefits which can be derived from it.
Abstract:
We propose a computationally efficient, border-pixel-based watermark embedding scheme for medical images. We considered the border pixels of a medical image as the RONI (region of non-interest), since those pixels are of little or no interest to doctors and medical professionals, irrespective of the image modality. Although the RONI is used for embedding, our proposed scheme still keeps distortion in the embedding region at a minimum level by using the optimum number of least significant bit-planes for the border pixels. All this not only ensures that a watermarked image is safe for diagnosis, but also helps minimize the legal and ethical concerns of altering all pixels of medical images in any manner (e.g., reversible or irreversible). The proposed scheme avoids the need for RONI segmentation, which incurs capacity and computational overheads. The performance of the proposed scheme has been compared with a relevant scheme in terms of embedding capacity, image perceptual quality (measured by SSIM and PSNR), and computational efficiency. Our experimental results show that the proposed scheme is computationally efficient, offers an image-content-independent embedding capacity, and maintains good image quality.
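A minimal sketch of the border-pixel LSB embedding idea described above, in Python with NumPy; the border width, number of bit-planes and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def embed_border_lsb(image: np.ndarray, payload_bits: list, border: int = 4,
                     n_planes: int = 2) -> np.ndarray:
    """Embed payload bits into the least significant bit-planes of the border
    pixels (the RONI), leaving the diagnostically relevant interior untouched."""
    out = image.copy()
    h, w = image.shape
    # Mark the top, bottom, left and right border strips as the embedding region.
    roni = np.zeros((h, w), dtype=bool)
    roni[:border, :] = roni[-border:, :] = True
    roni[:, :border] = roni[:, -border:] = True
    coords = np.argwhere(roni)
    capacity = len(coords) * n_planes
    if len(payload_bits) > capacity:
        raise ValueError(f"payload of {len(payload_bits)} bits exceeds capacity of {capacity} bits")
    bit_idx = 0
    for y, x in coords:
        for plane in range(n_planes):
            if bit_idx == len(payload_bits):
                return out
            keep_mask = 0xFF ^ (1 << plane)  # clear only the target bit-plane
            out[y, x] = (int(out[y, x]) & keep_mask) | (payload_bits[bit_idx] << plane)
            bit_idx += 1
    return out
```

Extraction reverses the same walk over the border pixels and bit-planes; because only border-pixel LSBs change, the distortion stays confined to the RONI, which is the property the scheme relies on.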
Abstract:
Vacuum cleaners can release large concentrations of particles, both in their exhaust air and from resuspension of settled dust. However, the size, variability and microbial diversity of these emissions are unknown, despite evidence to suggest they may contribute to allergic responses and infection transmission indoors. This study aimed to evaluate bioaerosol emission from various vacuum cleaners. We sampled the air in an experimental flow tunnel where vacuum cleaners were run and their airborne emissions sampled with closed-face cassettes. Dust samples were also collected from the dust bag. Total bacteria, total archaea, Penicillium/Aspergillus and total Clostridium cluster 1 were quantified with specific qPCR protocols and emission rates were calculated. The presence of Clostridium botulinum, as well as of antibiotic resistance genes, was assessed in each sample using endpoint PCR. Bacterial diversity was also analyzed using denaturing gradient gel electrophoresis (DGGE), image analysis and band sequencing. We demonstrated that emissions of bacteria and moulds (Pen/Asp) can reach values as high as 10^5/min and that those emissions are not related to each other. The bag dust bacterial and mould content was also consistent across the vacuums we assessed, reaching up to 10^7 bacteria or mould equivalents/g. Antibiotic resistance genes were detected in several samples. No archaea or C. botulinum were detected in any air samples. Diversity analyses showed that most bacteria are from human sources, in keeping with other recent results. These results highlight the potential of vacuum cleaners to disseminate appreciable quantities of moulds and human-associated bacteria indoors and their role as a source of exposure to bioaerosols.
Abstract:
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific sub-regions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.
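As a toy illustration of the belief updating such a network supports, the fragment below reduces the problem to two binary nodes — whether the set-up is accurate and whether the guidance image matches the plan — and applies Bayes' rule; all probabilities are invented for the example and are not taken from the framework:

```python
def update_setup_belief(prior_accurate: float,
                        p_match_given_accurate: float,
                        p_match_given_inaccurate: float,
                        image_match_observed: bool) -> float:
    """Posterior probability that the patient set-up is accurate, after observing
    whether the guidance image matched the treatment plan (illustrative numbers only)."""
    if image_match_observed:
        num = p_match_given_accurate * prior_accurate
        den = num + p_match_given_inaccurate * (1.0 - prior_accurate)
    else:
        num = (1.0 - p_match_given_accurate) * prior_accurate
        den = num + (1.0 - p_match_given_inaccurate) * (1.0 - prior_accurate)
    return num / den

# A 0.9 prior that the set-up is accurate, updated after a good image match.
print(update_setup_belief(0.9, 0.95, 0.20, True))  # ~0.977
```

The full framework replaces these two nodes with the many interacting factors listed above and propagates evidence through the DAG, but the update at each node follows the same principle.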
Abstract:
One method of addressing the shortage of science and mathematics teachers is to train scientists and other science-related professionals to become teachers. Advocates argue that, as discipline experts, these career changers can relate the subject matter knowledge to various contexts and applications in teaching. In this paper, through interviews and classroom observations with a former scientist and her students, we examine how one career changer used her expertise in microbiology to teach microscopy. These data provided the basis for a description of the teacher's instruction, which was then analysed for components of domain knowledge for teaching. Consistent with the literature, the findings revealed that this career changer needed to develop her pedagogical knowledge. However, an interesting finding was that the teacher's subject matter knowledge as a science teacher differed substantively from her knowledge as a scientist. This finding challenges the assumption that subject matter knowledge is readily transferable across professions and provides insight into how to better prepare and support career changers to transition from scientist to science teacher.
Abstract:
We present a study to understand the effect that negated terms (e.g., "no fever") and family history (e.g., "family history of diabetes") have on searching clinical records. Our analysis is aimed at devising the most effective means of handling negation and family history. In doing so, we explicitly represent a clinical record according to its different content types: negated, family history and normal content; the retrieval model weights each of these separately. Empirical evaluation shows that, overall, the presence of negation harms retrieval effectiveness, while family history has little effect. We show that negation is best handled by weighting negated content (rather than the common practice of removing or replacing it). However, we also show that many queries benefit from the inclusion of negated content and that negation is optimally handled on a per-query basis. Additional evaluation shows that adaptive handling of negated and family history content can have significant benefits.
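A minimal sketch of the field-weighted scoring idea, assuming the record has already been split into normal, negated and family-history content; the weights and the log-scaled term-frequency score are illustrative stand-ins for the paper's retrieval model:

```python
import math
from collections import Counter

# Illustrative weights: negated and family-history content still contribute, but less.
FIELD_WEIGHTS = {"normal": 1.0, "negated": 0.3, "family_history": 0.1}

def field_weighted_score(query_terms, record_fields, weights=FIELD_WEIGHTS):
    """Score a clinical record by summing per-field term matches, each field
    scaled by its own weight rather than being removed or replaced."""
    score = 0.0
    for field, text in record_fields.items():
        tf = Counter(text.lower().split())
        w = weights.get(field, 0.0)
        for term in query_terms:
            count = tf[term.lower()]
            if count:
                score += w * (1.0 + math.log(count))
    return score

record = {
    "normal": "patient presents with persistent cough and chest pain",
    "negated": "no fever no shortness of breath",
    "family_history": "family history of diabetes",
}
print(field_weighted_score(["fever", "cough"], record))
```

Setting a field's weight to zero reproduces the common "remove negated content" baseline, while tuning the weights per query corresponds to the adaptive handling the abstract reports as most effective.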
Abstract:
Dealing with digital medical images is raising many new security problems with legal and ethical complexities for local archiving and distant medical services. These include image retention and fraud, distrust and invasion of privacy. This project was a significant step forward in developing a complete framework for systematically designing, analyzing, and applying digital watermarking, with a particular focus on medical image security. A formal generic watermarking model, three new attack models, and an efficient watermarking technique for medical images were developed. These outcomes contribute to standardizing future research in formal modeling and complete security and computational analysis of watermarking schemes.
Abstract:
Dihalomethanes can produce liver tumors in mice but not in rats, and concern exists about the risk of these compounds to humans. Glutathione (GSH) conjugation of dihalomethanes has been considered to be a critical event in the bioactivation process, and risk assessment is based upon this premise; however, there is little experimental support for this view or information about the basis of genotoxicity. A plasmid vector containing rat GSH S-transferase 5-5 was transfected into the Salmonella typhimurium tester strain TA1535, which then produced active enzyme. The transfected bacteria produced base-pair revertants in the presence of ethylene dihalides or dihalomethanes, in the order CH2Br2 > CH2BrCl > CH2Cl2. However, revertants were not seen when cells were exposed to GSH, CH2Br2, and purified GSH S-transferase 5-5 (a 20-fold excess over the amount expressed within the cells). HCHO, which is an end product of the reaction of GSH with dihalomethanes, also did not produce mutations. S-(1-Acetoxymethyl)GSH was prepared as an analog of the putative S-(1-halomethyl)GSH reactive intermediates. This analog did not produce revertants, consistent with the view that activation of dihalomethanes must occur within the bacteria to cause genetic damage, presenting a model to be considered in studies with mammalian cells. S-(1-Acetoxymethyl)GSH reacted with 2′-deoxyguanosine to yield a major adduct, identified as S-[1-(N2-deoxyguanosinyl)methyl]GSH. Demonstration of the activation of dihalomethanes by this mammalian GSH S-transferase theta class enzyme should be of use in evaluating the risk of these chemicals, particularly in light of reports of the polymorphic expression of a similar activity in humans.
Abstract:
Despite ongoing improvements in behaviour change strategies, licensing models and road law enforcement measures, young drivers remain significantly over-represented in fatal and non-fatal road-related crashes. This paper focuses on the safety of those approaching driving age and identifies both high-priority road safety messages and relevant peer-led strategies to guide the development of school programs. It summarises the review in a program logic model built around the messages and identified curriculum elements, as they may be best operationalised within the licensing and school contexts in Victoria. This paper summarises a review of common deliberate risk-taking and non-deliberate unsafe driving behaviours among novice drivers, highlighting risks associated with speeding, driving while fatigued, driving while impaired and carrying passengers. Common beliefs of young people that predict risky driving were reviewed, particularly with consideration of those beliefs that can be operationalised in a behaviour change school program. Key components of adolescent risk behaviour change programs were also reviewed, which identified a number of strategies for incorporation in a school-based behaviour change program, including: a well-structured theoretical design and delivery, thoughtfully considered peer-selection processes, adequate training and supervision of peer facilitators, a process for monitoring and sustainability, and interactive delivery and participant discussions. The research base is then summarised in a program logic model, with further discussion of the current state of knowledge about the evaluation of behaviour change programs and the need for considerable development in program evaluation.
Abstract:
Non-rigid image registration is an essential tool required for overcoming the inherent local anatomical variations that exist between images acquired from different individuals or atlases. Furthermore, certain applications require this type of registration to operate across images acquired from different imaging modalities. One popular local approach for estimating this registration is a block matching procedure utilising the mutual information criterion. However, previous block matching procedures generate a sparse deformation field containing displacement estimates at uniformly spaced locations, which neglects the fact that block matching results depend on the amount of local information content. This paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo statistical procedure to optimally select grid points of interest. Three different methods are then compared for propagating the estimated sparse deformation field to the entire image: a thin-plate spline warp, Gaussian convolution, and a hybrid fluid technique. Results show that non-rigid registration can be improved by using the proposed algorithm to optimally select grid points of interest.
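A minimal sketch of the mutual-information block matching step that produces the sparse displacement estimates discussed above; the block and search-window sizes are illustrative, and the RJMCMC grid-point selection and dense propagation steps are not shown:

```python
import numpy as np

def mutual_information(block_a: np.ndarray, block_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information of two blocks from their joint intensity histogram,
    the similarity criterion that tolerates different imaging modalities."""
    joint, _, _ = np.histogram2d(block_a.ravel(), block_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_displacement(fixed: np.ndarray, moving: np.ndarray, centre,
                      block: int = 16, search: int = 8):
    """Exhaustive search over a (2*search+1)^2 window for the displacement that
    maximises mutual information between the fixed block and the shifted moving block."""
    cy, cx = centre
    h = block // 2
    ref = fixed[cy - h:cy + h, cx - h:cx + h]
    best_mi, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[cy - h + dy:cy + h + dy, cx - h + dx:cx + h + dx]
            mi = mutual_information(ref, cand)
            if mi > best_mi:
                best_mi, best_d = mi, (dy, dx)
    return best_d
```

Running this at a set of grid points yields the sparse deformation field; the paper's contribution is to choose those grid points with an RJMCMC procedure so that they fall where the local information content supports reliable matches, before propagating the field with a thin-plate spline, Gaussian convolution or hybrid fluid model.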
Abstract:
Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies of affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their incapacity to address the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but have seldom been investigated for affective classification of facial images in practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (or non-laboratory) data using facial expressions, where many "real-world" challenges are present, including pose, illumination and size variations. The proposed method is novel, with its framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors, and a combination of point-based texture and geometry features. Performance comparisons across several key parameters of the relevant algorithms are conducted to identify the optimum parameters for high accuracy and fast computation. A comprehensive set of experiments with existing and new datasets shows that the method is effective despite pose variations, fast and appropriate for large-scale data, and as accurate on laboratory-based data as the state-of-the-art method. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights.
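A minimal sketch of the fuse-and-classify stage implied above, assuming face detection and fiducial point localisation have already been run; the patch statistics stand in for the paper's texture features, and the scikit-learn SVM is an assumed, not stated, choice of classifier:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def geometry_features(points: np.ndarray) -> np.ndarray:
    """Translation- and scale-normalised fiducial point coordinates (N x 2),
    flattened into a geometry descriptor."""
    centred = points - points.mean(axis=0)
    scale = np.linalg.norm(centred) or 1.0
    return (centred / scale).ravel()

def texture_features(gray_face: np.ndarray, points: np.ndarray, patch: int = 8) -> np.ndarray:
    """Mean and standard deviation of a small patch around each fiducial point,
    a crude point-based texture descriptor used here purely for illustration."""
    feats = []
    for x, y in points.astype(int):
        p = gray_face[max(y - patch, 0):y + patch, max(x - patch, 0):x + patch]
        feats.extend([float(p.mean()), float(p.std())])
    return np.array(feats)

def describe_face(gray_face: np.ndarray, points: np.ndarray) -> np.ndarray:
    # Concatenate texture and geometry descriptors into one feature vector.
    return np.concatenate([texture_features(gray_face, points), geometry_features(points)])

def emotion_classifier():
    # Standardised features feed an RBF SVM; any multi-class classifier could be swapped in.
    return make_pipeline(StandardScaler(), SVC(kernel="rbf"))
```

Training then reduces to fitting `emotion_classifier()` on `describe_face` vectors labelled with affective classes; the multi-view detectors the abstract mentions would supply `gray_face` and `points` for non-frontal poses.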
Abstract:
For the first decade of its existence, the concept of citizen journalism has described an approach which was seen as a broadening of the participant base in journalistic processes, but still involved only a comparatively small subset of overall society – for the most part, citizen journalists were news enthusiasts and “political junkies” (Coleman, 2006) who, as some exasperated professional journalists put it, “wouldn’t get a job at a real newspaper” (The Australian, 2007), but nonetheless followed many of the same journalistic principles. The investment – if not of money, then at least of time and effort – involved in setting up a blog or participating in a citizen journalism Website remained substantial enough to prevent the majority of Internet users from engaging in citizen journalist activities to any significant extent; what emerged in the form of news blogs and citizen journalism sites was a new online elite which for some time challenged the hegemony of the existing journalistic elite, but gradually also merged with it. The mass adoption of next-generation social media platforms such as Facebook and Twitter, however, has led to the emergence of a new wave of quasi-journalistic user activities which now much more closely resemble the “random acts of journalism” which JD Lasica envisaged in 2003. Social media are not exclusively or even predominantly used for citizen journalism; instead, citizen journalism is now simply a by-product of user communities engaging in exchanges about the topics which interest them, or tracking emerging stories and events as they happen. Such platforms – and especially Twitter with its system of ad hoc hashtags that enable the rapid exchange of information about issues of interest – provide spaces for users to come together to “work the story” through a process of collaborative gatewatching (Bruns, 2005), content curation, and information evaluation which takes place in real time and brings together everyday users, domain experts, journalists, and potentially even the subjects of the story themselves. Compared to the spaces of news blogs and citizen journalism sites, but also of conventional online news Websites, which are controlled by their respective operators and inherently position user engagement as a secondary activity to content publication, these social media spaces are centred around user interaction, providing a third-party space in which everyday as well as institutional users, laypeople as well as experts converge without being able to control the exchange. Drawing on a number of recent examples, this article will argue that this results in a new dynamic of interaction and enables the emergence of a more broadly-based, decentralised, second wave of citizen engagement in journalistic processes.