334 results for Photography - Digital techniques
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium; either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. 
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes; a CCD camera can be used to monitor the readout beam, and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. One method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. 
This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to situations where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. 
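The fringe-counting idea behind this non-contact temperature measurement can be sketched as follows. Each full intensity oscillation corresponds to a fixed temperature increment set by the probe wavelength, the crystal length and the thermo-optic coefficient of the birefringence; all numerical values below are illustrative assumptions, not values from the thesis.

```python
# Sketch: counting intensity fringes to deduce a temperature change in a
# birefringent crystal.  One full oscillation corresponds to
# dT_per_fringe = wavelength / (L * |d(delta_n)/dT|).
# Wavelength, crystal length and thermo-optic coefficient are assumed
# demo values only.

import math

wavelength = 0.633e-6        # probe wavelength (m), assumed
L = 10e-3                    # crystal length along the beam (m), assumed
dbirefringence_dT = 4e-5     # |d(delta_n)/dT| per kelvin, assumed
dT_per_fringe = wavelength / (L * dbirefringence_dT)

def count_fringes(intensity):
    """Count full oscillations via rising crossings of the mean level."""
    mean = sum(intensity) / len(intensity)
    return sum(1 for a, b in zip(intensity, intensity[1:]) if a < mean <= b)

# Synthetic record: the crystal temperature ramps by dT_total kelvin,
# modulating the transmitted intensity sinusoidally.
dT_total = 5.0
samples = 2000
intensity = [
    0.5 * (1 + math.cos(2 * math.pi * (t / samples) * dT_total / dT_per_fringe))
    for t in range(samples)
]

# Counting whole fringes recovers the ramp to within one fringe increment
dT_estimate = count_fringes(intensity) * dT_per_fringe
assert abs(dT_estimate - dT_total) <= dT_per_fringe
```

Because only whole oscillations are counted, the resolution of this simple version is one fringe increment; monitoring several beam locations, as described above, just repeats the same count per region.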
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin (0.5 mm) medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. 
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
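The FD-BPM approach named above can be illustrated with a minimal sketch. This is not the thesis model (which couples the beam to a photorefractive index change); it only propagates a Gaussian beam through a uniform medium with a Crank-Nicolson finite-difference scheme, and the grid and beam parameters are illustrative assumptions.

```python
# Minimal 1-D (one transverse dimension) finite-difference beam
# propagation method (FD-BPM) sketch for the paraxial equation
#   dE/dz = (i / 2k) d^2E/dx^2
# using an unconditionally stable Crank-Nicolson step.  All parameters
# (wavelength, grid spacing, beam waist) are assumed demo values.

import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-, main, super-diagonals a, b, c)."""
    n = len(d)
    cp, dp = [0j] * n, [0j] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0j
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0j] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def bpm_step(E, rho):
    """One Crank-Nicolson z-step; rho = dz / (2 k dx^2)."""
    n = len(E)
    # Right-hand side: (I + i*rho/2 * D) E, with D the [1, -2, 1] Laplacian
    rhs = []
    for j in range(n):
        lap = (E[j - 1] if j > 0 else 0) - 2 * E[j] + (E[j + 1] if j < n - 1 else 0)
        rhs.append(E[j] + 0.5j * rho * lap)
    # Left-hand side matrix: (I - i*rho/2 * D), solved tridiagonally
    return thomas([-0.5j * rho] * n, [1 + 1j * rho] * n, [-0.5j * rho] * n, rhs)

# Gaussian input beam on a 256-point transverse grid (lengths in microns)
n, dx, dz = 256, 1.0, 0.5
k = 2 * math.pi / 0.633                 # wavenumber for 633 nm light
rho = dz / (2 * k * dx ** 2)
E = [complex(math.exp(-(((j - n / 2) * dx) / 10.0) ** 2)) for j in range(n)]

p0 = sum(abs(e) ** 2 for e in E)
for _ in range(200):                    # propagate 100 microns
    E = bpm_step(E, rho)
p1 = sum(abs(e) ** 2 for e in E)
# Crank-Nicolson is unitary, so beam power is conserved to roundoff
assert abs(p1 - p0) / p0 < 1e-9
```

A photorefractive version of this sketch would update the refractive index from the local intensity each step, which is what produces the scattering-driven degradation the model reveals; here the index is uniform, so the beam simply diffracts.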
Abstract:
This paper presents a comprehensive discussion of vegetation management approaches in power line corridors based on aerial remote sensing techniques. We address three issues: 1) strategies for risk management in power line corridors, 2) selection of suitable platforms and a sensor suite for data collection, and 3) the progress in automated data processing techniques for vegetation management. We present initial results from a series of experiments, along with challenges and lessons learnt from our project.
Abstract:
In a digital world, users’ Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS which is built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. 
Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user’s PII (resulting in an illegal revocation of the user’s anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are the proposal of two protocols that reduce the trust dependencies on the ARM during users’ anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user’s anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user’s anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is the proposal of a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying CLS-ACS in a federated single sign-on (FSSO) environment. Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users’ anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques. 
All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
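The state-space analysis underlying CPN verification amounts to exhaustively enumerating the reachable markings of a net and checking the required property in each. The toy net below (a two-process mutual exclusion, invented for illustration and not a model from the thesis) shows the idea in plain Python.

```python
# Toy illustration of state-space verification: enumerate every
# reachable marking of a small Petri net by breadth-first search and
# check a safety property in each one.  The mutex net is an assumed
# example, not a protocol model from the thesis.

from collections import deque

# Each transition: (set of places consumed, set of places produced)
transitions = [
    ({"idle1", "lock"}, {"crit1"}),   # process 1 enters critical section
    ({"crit1"}, {"idle1", "lock"}),   # process 1 leaves
    ({"idle2", "lock"}, {"crit2"}),   # process 2 enters critical section
    ({"crit2"}, {"idle2", "lock"}),   # process 2 leaves
]

def reachable(initial):
    """Breadth-first exploration of all reachable markings."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions:
            if pre <= m:                      # transition is enabled
                m2 = frozenset((m - pre) | post)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

states = reachable(frozenset({"idle1", "idle2", "lock"}))
# Safety property: both processes are never in the critical section at once
assert all(not {"crit1", "crit2"} <= m for m in states)
```

Real CPN markings carry typed ("coloured") tokens and the state spaces of the thesis protocols are far larger, but the exhaustive-enumeration-plus-property-check structure is the same.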
Abstract:
The Guardian reportage of the United Kingdom Member of Parliament (MP) expenses scandal of 2009 used crowdsourcing and computational journalism techniques. Computational journalism can be broadly defined as the application of computer science techniques to the activities of journalism. Its foundation lies in computer assisted reporting techniques, and its importance is increasing due to: (a) the increasing availability of large-scale government datasets for scrutiny; (b) the declining cost, increasing power and ease of use of data mining, filtering and Web 2.0 software; and (c) the explosion of online public engagement and opinion. This paper provides a case study of the Guardian MP expenses scandal reportage and reveals some key challenges and opportunities for digital journalism. It finds that journalists may increasingly take an active role in understanding, interpreting, verifying and reporting clues or conclusions that arise from the interrogation of datasets (computational journalism). Secondly, a distinction should be made between information reportage and computational journalism in the digital realm, just as a distinction might be made between citizen reporting and citizen journalism. Thirdly, an opportunity exists for online news providers to take a ‘curatorial’ role, selecting and making easily available the best data sources for readers to use (information reportage). These activities have always been fundamental to journalism; however, the way in which they are undertaken may change. Findings from this paper may suggest opportunities and challenges for the implementation of computational journalism techniques in practice by digital Australian media providers, and further areas of research.
Abstract:
This paper investigates the current turbulent state of copyright in the digital age, and explores the viability of alternative compensation systems that aim to achieve the same goals with fewer negative consequences for consumers and artists. To sustain existing business models associated with creative content, increased recourse to DRM (Digital Rights Management) technologies, designed to restrict access to and usage of digital content, is well underway. Considerable technical challenges associated with DRM systems necessitate increasingly aggressive recourse to the law. A number of controversial aspects of copyright enforcement are discussed and contrasted with those inherent in levy based compensation systems. Lateral exploration of the copyright dilemma may help prevent some undesirable societal impacts, but with powerful coalitions of creative, consumer electronics and information technology industries having enormous vested interest in current models, alternative schemes are frequently treated dismissively. This paper focuses on consideration of alternative models that better suit the digital era whilst achieving a more even balance in the copyright bargain.
Abstract:
Griffith University is developing a digital repository system using HarvestRoad Hive software to better meet the needs of academics and students using institutional learning and teaching, course readings, and institutional intellectual capital systems. Issues with current operations and systems are discussed in terms of user behaviour. New repository systems are being designed in such a way that they address current service and user behaviour issues by closely aligning systems with user needs. By developing attractive online services, Griffith is working to change current user behaviour to achieve strategic priorities in the sharing and reuse of learning objects, improved selection and use of digitised course readings, the development of ePrint and eScience services, and the management of a research portfolio service.
Abstract:
This article considers the concept of media citizenship in relation to the digital strategies of the Special Broadcasting Service (SBS). At SBS, Australia’s multicultural public broadcaster, there is a critical appraisal of its strategies to harness user-created content (UCC) and social media to promote greater audience participation through its news and current affairs Web sites. The article looks at the opportunities and challenges that user-created content presents for public service media organizations as they consolidate multiplatform service delivery. Also analyzed are the implications of radio and television broadcasters’ moves to develop online services. It is proposed that case study methodologies enable an understanding of media citizenship to be developed that maintains a focus on the interaction between delivery technologies, organizational structures and cultures, and program content that is essential for understanding the changing focus of 21st-century public service media.
Abstract:
Social media and digital technologies surround us. We are moving into an age of ubiquitous (that is, everywhere) computing. New media and information and communication technologies already impact on many aspects of everyday life, including work, home and leisure. These new technologies are influencing the way we develop social networks; understand places and location; navigate our cities; provide information about utilities and services; and develop new ways to engage and participate in our communities, in planning, in governance and in other decisions. This paper presents initial findings on the impacts that digital communication technologies are having on public urban spaces. It develops a contextual review of the nexus between urban planning and technological developments, with examples and case studies from around the world, to highlight some of the potential directions for urban planning in Queensland and Australia. It concludes with some thought-provoking discussion points for urban planners, architects, designers and placemakers on the future of urban informatics and urban design: how can technology enhance ‘place’? How can technology be used to improve public participation? And how will technology change our requirements of public places?
Abstract:
‘Digital storytelling’ is a workshop-based practice in which ‘ordinary’ people are taught to use digital media to create short audio-video stories, usually about their own lives. The idea is that this puts the universal human delight in narrative and self-expression into the hands of everyone in the digital age, and potentially brings individual experience, ideas, creativity and imagination to the attention of the whole world. It gives a voice to the myriad tales of everyday life as experienced by ordinary people in their own terms. Despite its use of the latest technologies, its purpose is simple and human.
Abstract:
On 13 February 2008 Prime Minister Kevin Rudd made an apology to Australia’s Indigenous People on behalf of the Parliament of Australia. The State Library of Queensland, with assistance from Queensland University of Technology and Queensland’s Aboriginal and Torres Strait Islander communities, captured responses to this historic event in a collection of digital stories. Stories were created with: Tiga Bayles; Jeremy Robertson; Natalie Alberts; Sam Wagan Watson Jr; Nadine McDonald-Dowd; Anna Bligh; and Quentin Bryce.
Abstract:
There are at least four key challenges in the online news environment that computational journalism may address. Firstly, news providers operate in a rapidly evolving environment and larger businesses are typically slower to adapt to market innovations. Secondly, news consumption patterns have changed and news providers need to find new ways to capture and retain digital users. Thirdly, declining financial performance has led to cost cuts in mass market newspapers. Finally, investigative reporting is typically slow, high cost and may be tedious, yet is valuable to the reputation of a news provider. Computational journalism involves the application of software and technologies to the activities of journalism, and it draws from the fields of computer science, social science and communications. New technologies may enhance the traditional aims of journalism, or may require “a new breed of people who are midway between technologists and journalists” (Irfan Essa in Mecklin 2009: 3). Historically referred to as ‘computer assisted reporting’, the use of software in online reportage is increasingly valuable due to three factors: larger datasets are becoming publicly available; software is becoming more sophisticated and ubiquitous; and the Australian digital economy is developing. This paper introduces key elements of computational journalism: it describes why it is needed and what it involves, outlines benefits and challenges, and provides a case study and examples. When correctly used, computational techniques can quickly provide a solid factual basis for original investigative journalism and may increase interaction with readers. It is a major opportunity to enhance the delivery of original investigative journalism, which ultimately may attract and retain readers online.
Abstract:
Live coding performances provide a context with particular demands and limitations for music making. In this paper we discuss how as the live coding duo aa-cell we have responded to these challenges, and what this experience has revealed about the computational representation of music and approaches to interactive computer music performance. In particular we have identified several effective and efficient processes that underpin our practice including probability, linearity, periodicity, set theory, and recursion and describe how these are applied and combined to build sophisticated musical structures. In addition, we outline aspects of our performance practice that respond to the improvisational, collaborative and communicative requirements of musical live coding.
Abstract:
The impact of Web 2.0 and social networking tools such as virtual communities on education has been much commented on. The challenge for teachers is to embrace these new social networking tools and apply them to new educational contexts. The increasingly digitally-abled student cohorts and the need for educational applications of Web 2.0 are challenges that overwhelm many educators. This chapter will make three important contributions. Firstly, it will explore the characteristics and behaviours of digitally-abled students enrolled in higher education. An innovation of this chapter will be the application of Bourdieu’s notions of capital, particularly social, cultural and digital capital, to understand these characteristics. Secondly, it will present a possible use of a commonly used virtual community, Facebook©. Finally, it will offer some advice for educators who are interested in using popular social networking communities, similar to Facebook©, in their teaching and learning.
Abstract:
This paper investigates the use of time-frequency techniques to assist in the estimation of power system modes which are resolvable by a Digital Fourier Transform (DFT). The limitations of linear estimation techniques in the presence of large disturbances which excite system non-linearities, particularly the swing equation non-linearity, are shown. Where a non-linearity manifests itself as time-varying modal frequencies, the Wigner-Ville Distribution (WVD) is used to describe the variation in modal frequencies and to construct a window over which standard linear estimation techniques can be used. The error obtained, even in the presence of multiple resolvable modes, is better than 2%.
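As a sketch of the linear (DFT-based) estimation step, the snippet below recovers the frequency of a synthetic lightly damped mode from the peak of a DFT over a fixed window. The 1.5 Hz mode, sampling rate and damping are illustrative assumptions, and the WVD-guided window selection described in the paper is not shown.

```python
# Sketch of linear modal estimation over a window: sample one damped
# electromechanical "mode" and recover its frequency from the peak DFT
# bin.  Mode frequency, damping and sampling parameters are assumed
# demo values, not results from the paper.

import cmath
import math

fs, n = 20.0, 400                    # 20 Hz sampling, 20 s window
f0, zeta = 1.5, 0.02                 # mode frequency (Hz) and light damping
x = [math.exp(-zeta * 2 * math.pi * f0 * t / fs)
     * math.cos(2 * math.pi * f0 * t / fs)
     for t in range(n)]

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin of the record x."""
    n = len(x)
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n)))

# Search bins up to Nyquist for the spectral peak, then convert to Hz
peak = max(range(1, n // 2), key=lambda k: dft_mag(x, k))
f_est = peak * fs / n
assert abs(f_est - f0) / f0 < 0.02   # within the 2% figure quoted above
```

When the disturbance is large enough that the modal frequency drifts during the record, a single window like this is exactly what breaks down; the paper's use of the WVD is to find a sub-window over which the frequency is effectively constant before applying this kind of linear estimate.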