941 results for Digital mammographic images
Abstract:
In this study x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise-reduction achieved using the ‘zero-scan’ method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner’s x-ray tube, this study used a large slice thickness (1 cm), to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63), to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full, 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
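The per-pixel zero-scan extrapolation described above can be sketched as follows. This is an illustrative reconstruction, not the study's code: the function names and the toy data are assumptions, and the idea is simply that each pixel's Hounsfield value is fitted linearly against the scan index, with the intercept (scan zero) taken as the noise-reduced value.

```python
def zero_scan_pixel(hu_values):
    """Least-squares linear fit of HU against scan index; return the intercept,
    i.e. the extrapolated 'zero-scan' value for this pixel."""
    n = len(hu_values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(hu_values) / n
    sxx = sum((x - x_mean) ** 2 for x in xs)
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, hu_values))
    slope = sxy / sxx
    return y_mean - slope * x_mean

def zero_scan_image(scans):
    """scans: list of repeated CT images of the same slice (2D lists of HU).
    Returns the extrapolated zero-scan image."""
    rows, cols = len(scans[0]), len(scans[0][0])
    return [[zero_scan_pixel([s[r][c] for s in scans]) for c in range(cols)]
            for r in range(rows)]
```

Averaging the repeated scans would also reduce noise, but the linear fit additionally removes the dose-dependent drift accumulated during the repeated scanning, which is the point of extrapolating to scan zero.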
Abstract:
Distribution through electronic media provides an avenue for promotion, recognition and an outlet for display for graphic designers. The emergence of available media technologies has enabled graphic designers to extend the boundaries of their practice. In this context the designer is constantly striving for aesthetic success and is strongly influenced by the fashion and trends of contemporary design work. The designer is always in a state of inquiry, finding pathways of discovery that lead to innovation and originality, which are highly valued criteria for self-evaluation. This research is based on an analysis of the designer's perspective and the processes used within an active graphic design practice specializing entirely in the digital collage domain. Contemporary design methodologies were critically examined, compared and refined to reflect the self-practice of the researcher. The refined methodology may assist designers in maintaining systematic work practices, as well as promote the importance of exploration and experimentation processes. Research findings indicate some differences between the identified methodologies and the design practice of the researcher, in the sense that many contemporary designers are not confined to a client base but are self-generating design images influenced by contemporary practitioners. As well as confirming some aspects of more conventional design processes, the researcher found that accidental discoveries and the designer's interaction with technology play a significant part in the design process.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
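The generic pipeline these two abstracts describe (feature extraction, keyed linear randomization, threshold quantization, binary encoding) can be illustrated with a minimal sketch. The block-mean features, random-projection randomizer and all parameter choices below are illustrative assumptions, not the HOS/Radon method the dissertation proposes:

```python
import random

def robust_hash(image, key, n_bits=16, block=4):
    """Toy robust image hash: block-mean features, a keyed linear random
    projection, then one bit per projection, thresholded at the median."""
    rows, cols = len(image), len(image[0])
    # 1. Feature extraction: mean intensity of non-overlapping blocks
    feats = []
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            vals = [image[r + i][c + j] for i in range(block) for j in range(block)]
            feats.append(sum(vals) / len(vals))
    # 2. Keyed (linear) randomization: random projections of the feature vector
    rng = random.Random(key)
    proj = [sum(f * rng.uniform(-1, 1) for f in feats) for _ in range(n_bits)]
    # 3. Quantization and encoding: the median acts as the learnt threshold,
    #    the very quantity whose leakage the dissertation analyses
    thr = sorted(proj)[len(proj) // 2]
    return [1 if p >= thr else 0 for p in proj]

def hamming(h1, h2):
    """Hashes are compared by Hamming distance rather than exact equality."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because the features vary smoothly with the input, minor image changes move the projections only slightly and most bits survive, which is exactly the robustness that makes the linear randomization stage a security liability.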
Abstract:
As a decentralised communication technology, the Internet has offered much autonomy and unprecedented communication freedom to the Chinese public, yet the Chinese government has imposed different forms of censorship over cyberspace. The Hong Kong erotic photo scandal of 2008, however, reshaped the traditional understanding of censorship in China, as it points to a different territory. The paper takes the scandal as a case study and aims to examine the social and generational conflicts hidden in China. When thousands of photos containing sexually explicit images of Hong Kong celebrities were released on the Internet, gossip, controversies and eroticism fuelled the public discussion and threatened traditional values in China. The Internet provides an alternative space for the young Chinese who have been excluded from mainstream social discourse to engage in public debates. This, however, creates concern, fear and even anger among the older generations in China, because they can no longer control, monitor and educate their children in the way that their predecessors have done for centuries. The photo scandal illustrates the internal social conflicts and distrust between generations in China, and the generational conflict has far-reaching political ramifications as it creates a new concept of censorship.
Abstract:
Literacy educator Kathy Mills observes that creating multimodal and digital texts is an essential part of the national English curriculum in Australia. Here, she presents five practical and engaging ways to transform conventional writing tasks in a digital world.
Abstract:
This chapter investigates the relationship between technical and operational skills and the development of conceptual knowledge and literacy in Media Arts learning. It argues that there is a relationship between the stories, expressions and ideas that students aim to produce with communications media, and their ability to realise these in material form through technical processes in specific material contexts. Our claim is that there is a relationship between the technical and the operational, along with material relations and the development of conceptual knowledge and literacy in media arts learning. We place more emphasis on the material aspects of literacy than is usually the case in socio-cultural accounts of media literacy. We provide examples from a current project to demonstrate that it is just as important to address the material as it is the discursive and conceptual when considering how students develop media literacy in classroom spaces.
Abstract:
Organizations increasingly make use of social media in order to compete for customer awareness and to improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigation. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online social media data in order to reconstruct digital social signals; matching the signals with consumers’ life events; and configuring individualized goods and service offerings tailored to the individual needs of customers. We further contribute by tying together loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
Abstract:
This thesis analysed the theoretical and ontological issues of previous scholarship concerning information technology and indigenous people. As an alternative, the thesis used the framework of actor-network theory, especially through historiographical and ethnographic techniques. The thesis revealed an assemblage of indigenous/digital enactments striving for relevance and avoiding obsolescence. It also recognised heterogeneities, including user ambivalences, oscillations, noise, non-coherences and disruptions, as part of the milieu of the daily digital lives of indigenous people. By taking heterogeneities into account, the thesis ensured that the data “speaks for itself” and that social inquiry is not overtaken by ideology and ontology.
Abstract:
In this paper, we present three counterfeiting attacks on block-wise dependent fragile watermarking schemes. We consider vulnerabilities such as the exploitation of a weak correlation among block-wise dependent watermarks to modify valid watermarked (medical or other digital) images in such a way that they can still be verified as authentic even though they are not. Experimental results successfully demonstrate the practicability and consequences of the proposed attacks for some relevant schemes. The proposed attack models can be used as a means to systematically examine the security levels of similar watermarking schemes.
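As an illustration of the kind of weakness such counterfeiting attacks exploit, the sketch below implements a deliberately weak block-wise fragile scheme whose per-block mark depends only on that block's own content and a key. Transplanting an intact watermarked block from another image protected with the same key then still verifies as authentic. This toy scheme and its parameters are assumptions for illustration only, not one of the schemes attacked in the paper:

```python
import hashlib

BLOCK = 4  # pixels per block (illustrative)

def _mark(msbs, key):
    # 1-bit fragile mark over the block's 7 most-significant bits and a key
    return hashlib.sha256(key.encode() + bytes(msbs)).digest()[0] & 1

def embed(pixels, key):
    """Store each block's mark in the LSB of the block's first pixel.
    Weakness: the mark ignores the block's position and all other blocks."""
    out = list(pixels)
    for b in range(0, len(out), BLOCK):
        msbs = [p >> 1 for p in out[b:b + BLOCK]]
        out[b] = (out[b] & ~1) | _mark(msbs, key)
    return out

def verify(pixels, key):
    """Re-derive each block's mark and compare it with the embedded LSB."""
    for b in range(0, len(pixels), BLOCK):
        msbs = [p >> 1 for p in pixels[b:b + BLOCK]]
        if (pixels[b] & 1) != _mark(msbs, key):
            return False
    return True
```

Because each block carries its own valid mark independently of its neighbours, an attacker who holds any two images watermarked under the same key can splice their blocks freely, which is the block-wise dependence failure the attack models probe.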
Abstract:
This paper presents research findings and design strategies that illustrate how digital technology can be applied as a tool for hybrid placemaking in ways that would not be possible in purely digital or physical space. Digital technology has revolutionised the way people learn and gather new information. This trend has challenged the role of the library as a physical place, as well as the interplay of the digital and physical aspects of the library. The paper provides an overview of how the penetration of digital technology into everyday life has affected the library as a place, both as designed by placemakers and as perceived by library users. It then identifies a gap in current library research about the use of digital technology as a tool for placemaking, and reports results from a study of Gelatine, a custom-built user check-in system that displays real-time user information on a set of public screens. Gelatine and its evaluation at The Edge at the State Library of Queensland illustrate how combining the affordances of social, spatial and digital space can improve the connected learning experience of on-site visitors. Future design strategies involving gamifying the user experience in libraries are described based on Gelatine's infrastructure. The presented design ideas and concepts are relevant for managers and designers of libraries as well as other informal, social learning environments.
Abstract:
This paper reports on an adaptation of Callon and Law’s (1995) hybrid collectif, derived from research conducted on the usage of mobile phones and internet technologies among the iTadian indigenous people of the Cordillera region, northern Philippines. Results bring to light an indigenous digital collectif: an emergent effect of the translation of both human and non-human heterogeneous actors as well as pre-existing networks, such as traditional knowledge and practices, kinship relations, the traditional exchange of goods, modern academic requisites, and advocacies for indigenous rights. This is evinced by the iTadian’s enrolment of internet and mobile phone technologies, for example treating these technologies as an efficient communicative tool, an indicator of well-being, and a portable extension of affective human relationships. Counter-enrolment strategies are also at play, including establishing rules of acceptable use for SMS texting and internet access based on traditional notions of discretion, privacy, and the customary treatment of the dead. The boundaries of this digital collectif reveal imbrications of pre-existing networks such as traditional customs, the kinship system across geophysical boundaries, the traditional exchange of mail and other goods, and the advocacy of indigenous rights. These imbrications show that the iTadian digital collectif fluently configures itself to a variety of networked ontologies without losing its character.
Abstract:
The purpose of this paper is to investigate the edge condition between the digital and physical layers of the city, and how tangible expressions of the interrelationships between them create and define new experiences of place: hybrid place. To date there has been discussion and investigation into understanding the importance of place, and similarly into defining hybrid space. This paper explores principles of place and space to question how they can be applied to defining and proposing the notion of hybrid place in urban environments. The integration of media spaces into architecture provides infrastructure for the development of hybrid place. The physical boundaries of urban spaces become blurred through the integration of media, such as computer technologies, connecting the physical environment with the digital. Literature and case studies that reflect current trends in the use of technology by people in space and place within urban environments are examined.
Abstract:
We propose a computationally efficient, image-border-pixel-based watermark embedding scheme for medical images. We consider the border pixels of a medical image as the RONI (region of non-interest), since those pixels are of no or little interest to doctors and medical professionals irrespective of the image modality. Although the RONI is used for embedding, our proposed scheme still keeps distortion in the embedding region at a minimum level by using the optimum number of least-significant bit-planes for the border pixels. All this not only ensures that a watermarked image is safe for diagnosis, but also helps minimize the legal and ethical concerns of altering all pixels of medical images in any manner (e.g., reversible or irreversible). The proposed scheme avoids the need for RONI segmentation, which incurs capacity and computational overheads. The performance of the proposed scheme has been compared with a relevant scheme in terms of embedding capacity, image perceptual quality (measured by SSIM and PSNR), and computational efficiency. Our experimental results show that the proposed scheme is computationally efficient, offers an image-content-independent embedding capacity, and maintains good image quality.
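A minimal sketch of border-pixel embedding of this general kind is shown below, assuming a grayscale image stored as a list of rows; the function names, border width and bit-plane count are illustrative assumptions, not the authors' exact scheme:

```python
def border_coords(rows, cols, width=1):
    """Coordinates of the image border, used here as the RONI."""
    return [(r, c) for r in range(rows) for c in range(cols)
            if r < width or r >= rows - width or c < width or c >= cols - width]

def embed_border(image, bits, n_lsb=2):
    """Write watermark bits into the n_lsb lowest bit-planes of border
    pixels only; interior (diagnostically relevant) pixels are untouched."""
    out = [row[:] for row in image]
    it = iter(bits)
    for r, c in border_coords(len(out), len(out[0])):
        for plane in range(n_lsb):
            try:
                bit = next(it)
            except StopIteration:
                return out  # all watermark bits embedded
            out[r][c] = (out[r][c] & ~(1 << plane)) | (bit << plane)
    return out

def extract_border(image, n_bits, n_lsb=2):
    """Read the watermark back in the same traversal order."""
    bits = []
    for r, c in border_coords(len(image), len(image[0])):
        for plane in range(n_lsb):
            if len(bits) == n_bits:
                return bits
            bits.append((image[r][c] >> plane) & 1)
    return bits
```

Because the border location is fixed by the image dimensions, no RONI segmentation map needs to be computed or transmitted, which is the capacity and computation saving the abstract refers to.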
Abstract:
This thesis developed and evaluated strategies for social and ubiquitous computing designs that can enhance connected learning and networking opportunities for users in coworking spaces. Based on a social and a technical design intervention deployed at the State Library of Queensland, the research findings illustrate the potential of combining social, spatial and digital affordances in order to nourish peer-to-peer learning, creativity, inspiration, and innovation. The study proposes a hybrid notion of placemaking as a new way of thinking about the design of coworking and interactive learning spaces.