657 results for DIGITAL VIDEO
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers: one has been published, three have been accepted for publication and the other three are under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant, with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems using digital estimation of power system signal parameters. Distributed Generation has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system, as a compensator or as a power source with high-quality performance, is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components alongside the desired current.
Noise and harmonic distortion can also degrade the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion, and thereby achieve satisfactory DG system performance, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing digital samples of power system signals. Thus, the scope of this thesis is advanced techniques for the digital estimation of signal parameters, together with methods for generating DG reference currents from the estimates provided. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the best-known advanced techniques for estimating power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks, followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared against the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm estimates signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is supplied to the Kalman filter and used to build the transition matrices.
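The thesis abstract gives no code, but the general Chapter 3 arrangement it describes (a plain-summation FIR pre-filter feeding a linear Kalman filter whose transition matrix is rebuilt from a separately estimated frequency) can be sketched roughly as follows. This is a hypothetical two-state formulation with assumed noise settings (`q`, `r`), not the thesis's actual implementation:

```python
import numpy as np

def moving_average(x, m):
    """Plain-summation FIR filter (a stand-in for the thesis's
    summation-based FIR design): the running mean of m samples."""
    kernel = np.ones(m) / m
    return np.convolve(x, kernel, mode="valid")

def kalman_track_sinusoid(z, freq_hz, fs, q=1e-5, r=1e-2):
    """Track amplitude and phase of a sinusoid with a linear Kalman filter.

    State s = [A*cos(theta), A*sin(theta)]; the transition matrix is a
    rotation by alpha = 2*pi*f/fs, built from the externally supplied
    frequency estimate, as the abstract describes."""
    alpha = 2 * np.pi * freq_hz / fs
    F = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])  # rotation per sample
    H = np.array([[1.0, 0.0]])  # we observe only the cosine component
    s = np.zeros(2)             # initial state (found by trial and error in Ch. 3)
    P = np.eye(2)               # initial covariance
    Q = q * np.eye(2)           # assumed process-noise covariance
    amps, phases = [], []
    for zk in z:
        # predict
        s = F @ s
        P = F @ P @ F.T + Q
        # update with the new sample
        y = zk - H @ s
        S = H @ P @ H.T + r
        K = P @ H.T / S
        s = s + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        amps.append(np.hypot(s[0], s[1]))
        phases.append(np.arctan2(s[1], s[0]))
    return np.array(amps), np.array(phases)
```

In this sketch the frequency is passed in as a known constant; in the thesis it comes from a separate estimation unit operating on the FIR-filtered samples, and the transition matrix is rebuilt whenever that estimate changes.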
The initial settings for the modified Kalman filter are obtained through trial and error. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is likewise modified to operate on the output of the same FIR filter described above. However, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated; it interacts with the Kalman filter. The estimated frequency is supplied to the Kalman filter, and parameters such as the amplitudes and phase angles estimated by the Kalman filter are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements that reduce the noise level on the sample vector. The purified state vector is used to form a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to that of a basic Kalman filter, except that the initial settings are computed through extensive mathematical analysis of the matrix arrangement employed. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for better performance of the modified Kalman filter are calculated instead of being guessed by trial and error. The simulation results for the estimated signal parameters are improved by the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, large, sudden changes in the signal parameters at these critical transients are not well tracked by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking such changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. This time, the ability of the proposed algorithm is also verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated from the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in a scheme for interconnecting a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used to generate the references for the compensation scheme. Several simulation tests in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents voltage distortion at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
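The LES step that Chapter 6 combines with the modified Kalman filter can be sketched as a windowed least-squares fit of a sinusoid at a known frequency. The exact LES formulation in the thesis may differ; this is a minimal illustration assuming the frequency estimate is already available:

```python
import numpy as np

def les_estimate(z, freq_hz, fs):
    """Least Error Square (LES) estimate of amplitude and phase over a
    short window of samples z, assuming the frequency is known.

    Model: z[n] = a*cos(theta_n) + b*sin(theta_n), theta_n = 2*pi*f*n/fs,
    solved in the least-squares sense over the whole window."""
    n = np.arange(len(z))
    theta = 2 * np.pi * freq_hz * n / fs
    M = np.column_stack([np.cos(theta), np.sin(theta)])
    (a, b), *_ = np.linalg.lstsq(M, z, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(-b, a)  # so that z[n] = amplitude*cos(theta_n + phase)
    return amplitude, phase
```

Because the fit is computed directly from a short block of recent samples rather than recursively, a step change in amplitude or phase is reflected as soon as the window slides past the transient, which is consistent with the abstract's claim that LES tracks critical transients faster than the Kalman filter.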
Abstract:
Player experience of spatiality in first-person, single-player games is informed by the maps and navigational aids provided by the game. This project uses textual analysis to examine the way these maps and navigational aids inform the experience of spatiality in Fallout 3, BioShock and BioShock 2. Spatiality is understood as trialectic, incorporating perceived, conceived and lived space, drawing on the work of Henri Lefebvre and Edward Soja. The most prominent elements of the games’ maps and navigational aids are analysed in terms of how they inform players’ experience of the games’ spaces. In particular, this project examines the in-game maps these games incorporate, the waypoint navigation and fast-travel systems in Fallout 3, and the guide arrow and environmental cues in the BioShock games.
Abstract:
Journeys with Friends. Truna aka J. Turner, Giselle Rosman and Matt Ditton. Panel session description: We are no longer an industry (alone); we are a sector. Where the model once consisted of industry making games, we now see the rise of a cultural sector playing in the game space: industry, indies (for whatever that distinction implies), artists (another odd distinction), individuals and, well, everyone and their mums. This evolution has an effect on audiences and who they are, what they expect and want, and how they understand the purpose and language of these “digital game forms”; on how we talk about our worlds and the kinds of issues that are raised; on what we create and how we create it; and on our communities and who we are. This evolution also affects how these works are understood within the wider social context and how we present this understanding to the next generation of makers and players. We can see the potential of this evolution from industry to sector in the rise of the Australian indie. We can see the potential fractures created by this evolution in the new voices that ask questions about diversity and social justice. And yet we still see a ‘solution’-type reaction to the current changing state of our sector, one that announces the monolithic, Fordist model as desirable (albeit in smaller form), with subsequent ramifications for ‘training’ and the production of local talent. Experts talk about a mismatch between graduate skills and industry needs, insufficient linkages between industry and education providers, and the need to explore opportunities for the now-passing model in new spaces such as adver-games and serious games. Head counts of the Australian industry do not recognise transmedia producers as part of their purview or opportunity, and do not count the rise of playful, game-inspired creative works as part of their team.
Such perspectives are indeed relevant to the Australian games industry, but what about the emerging Australian games sector? How do we enable a future in such a space? This emerging sector is perhaps best represented by Melbourne’s Freeplay audience: a heady mix of indie developers, players, artists, critical thinkers and industry. Such audiences are no longer content with an ‘industry’ alone; they are a community who already see themselves as an important, vibrant cultural sector. Part of the discussion presented here seeks to identify and understand the resources, primarily in the context of community and educational opportunities, available to an evolving sector that now relies more heavily on creative processes. This creative process and community building are already visibly growing within smaller development studios, often involving more multiskilled production methodologies, where the definition of ‘game’ clearly evolves beyond the traditional one.
Abstract:
This chapter describes how, as YouTube has scaled up both as a platform and as a company, its business model and the consequences for its copyright regulation strategies have co-evolved, and so too the boundaries between amateur and professional media have shifted and blurred in particular ways. As YouTube, Inc moves to more profitably arrange and stabilise the historically contentious relations among rights-holders, uploaders, advertisers and audiences, some forms of amateur video production have become institutionalised and professionalised, while others have been further marginalised and driven underground or to other, more forgiving, platforms.
Abstract:
In this article I examine the promise and possibilities of music, digital media and the National Broadband Network. I do this based on concepts that emerged from a study undertaken by Professor Andrew Brown and me, which categorises technologies into what we term representational technologies and technologies with agency.
Abstract:
Advances in information and communication technologies have brought about an information revolution, leading to fundamental changes in the way information is collected or generated, shared and distributed. The internet and digital technologies are re-shaping research, innovation and creativity. Economic research has highlighted the importance of information flows and the availability of information for access and re-use. Information is crucial to the efficiency of markets, and enhanced information flows promote creativity, innovation and productivity. There is a rapidly expanding body of literature which supports the economic and social benefits of enabling access to and re-use of public sector information. (Note that a substantial research project associated with QUT’s Intellectual Property: Knowledge, Culture and Economy (IPKCE) Research Program is engaged in a comprehensive study and analysis of the literature on the economics of access to public sector information.)
Abstract:
Learning a digital tool is often a hidden process. We tend to learn new tools in a bewildering range of ways: formal, informal, structured, random, conscious, unconscious, individual and group strategies may all play a part, but these are often lost to us in the complex and demanding process of learning. When we reflect carefully on the experience, however, some patterns and surprising techniques emerge. This monograph presents the thinking of four students in MDN642, Digital Pedagogies, who deliberately reflected on the mental processes at work as they learnt a digital technology of their choice.
Abstract:
These digital stories were produced during a commercial research project with SLQ and Flying Arts. The works build on the research of Klaebe and Burgess on variable workshop scenarios and the institutional contexts of co-creative media. In this instance, the research focused on the distributed digital storytelling workshop model and the development of audiences for digital storytelling. The research team worked with regional artists whose work had been selected for inclusion in the Five Senses exhibition held at the State Library of Queensland to produce stories about their work; these works were then in turn integrated into the physical exhibition space. Geographic remoteness and the timeline were factors in how the stories were made, through a mix of individual meetings and remote correspondence (email and phone).
Abstract:
Historically, determining the country of origin of a published work presented few challenges, because works were generally published physically – whether in print or otherwise – in a distinct location or few locations. However, publishing opportunities presented by new technologies mean that we now live in a world of simultaneous publication – works that are first published online are published simultaneously in every country in the world in which there is Internet connectivity. While this is certainly advantageous for the dissemination and impact of information and creative works, it creates potential complications under the Berne Convention for the Protection of Literary and Artistic Works (“Berne Convention”), an international intellectual property agreement to which most countries in the world now subscribe. Under the Berne Convention’s national treatment provisions, rights accorded to foreign copyright works may not be subject to any formality, such as registration requirements (although member countries are free to impose formalities in relation to domestic copyright works). In Kernel Records Oy v. Timothy Mosley p/k/a Timbaland, et al., however, the Florida Southern District Court of the United States ruled that first publication of a work on the Internet via an Australian website constituted “simultaneous publication all over the world,” and therefore rendered the work a “United States work” under the definition in section 101 of the U.S. Copyright Act, subjecting the work to the registration formality of section 411. This ruling is in sharp contrast with an earlier decision delivered by the Delaware District Court in Håkan Moberg v. 33T LLC, et al., which arrived at the opposite conclusion. The conflicting rulings of the U.S. courts reveal the problems posed by new forms of online publishing and demonstrate a compelling need for further harmonization between the Berne Convention, domestic laws and the practical realities of digital publishing.
In this article, we argue that even if a work first published online can be considered to be simultaneously published all over the world it does not follow that any country can assert itself as the “country of origin” of the work for the purpose of imposing domestic copyright formalities. More specifically, we argue that the meaning of “United States work” under the U.S. Copyright Act should be interpreted in line with the presumption against extraterritorial application of domestic law to limit its application to only those works with a real and substantial connection to the United States. There are gaps in the Berne Convention’s articulation of “country of origin” which provide scope for judicial interpretation, at a national level, of the most pragmatic way forward in reconciling the goals of the Berne Convention with the practical requirements of domestic law. We believe that the uncertainties arising under the Berne Convention created by new forms of online publishing can be resolved at a national level by the sensible application of principles of statutory interpretation by the courts. While at the international level we may need a clearer consensus on what amounts to “simultaneous publication” in the digital age, state practice may mean that we do not yet need to explore textual changes to the Berne Convention.
Abstract:
The Making Design and Analysing Interaction track at the Participatory Innovation Conference calls for submissions from ‘Makers’, who will contribute examples of participatory innovation activities documented in video, and ‘Analysts’, who will analyse those examples of participatory innovation activity. The aim of this paper is to open up, within the format of the track, a discussion of the roles that designers could play in analysing the participatory innovation activities of others, and to provide a starting point for this discussion through a concrete example of such ‘designerly analysis’. Designerly analysis opens new analytic frames for understanding participatory innovation and contributes to our understanding of design activities.
Abstract:
Google, Facebook, Twitter and LinkedIn are some of the prominent large-scale digital service providers that are having a tremendous impact on societies, corporations and individuals. However, despite their rapid uptake and obvious influence on the behavior of individuals and on the business models and networks of organizations, we still lack a deeper, theory-guided understanding of the phenomenon. We use Teece’s notion of complementary assets and extend it towards ‘digital complementary assets’ (DCAs) in an attempt to provide such a theory-guided understanding of these digital services. Building on Teece’s theory, we make three contributions. First, we offer a new conceptualization of digital complementary assets in the form of digital public goods and digital public assets. Second, we differentiate three models for how organizations can engage with such digital complementary assets. Third, the user base is found to be a critical factor when considering appropriability.
Abstract:
The exhibition consists of a series of 9 large-scale cotton rag prints, printed from digital files, and a sound and picture animation on DVD composed of drawings, sound, analogue and digital photographs, and Super 8 footage. The exhibition represents the artist’s experience of Singapore during her residency. Source imagery was gathered from photographs taken at the Bukit Brown abandoned Chinese Cemetery in Singapore, and Australian native gardens in Parkville Melbourne. Historical sources include re-photographed Singapore 19th and early 20th century postcard images. The works use analogue, hand-drawn and digital imaging, still and animated, to explore the digital interface’s ability to combine mixed media. This practice stems from the digital imaging practice of layering, using various media editing software. The work is innovative in that it stretches the idea of the layer composition in a single image by setting each layer into motion using animation techniques. This creates a multitude of permutations and combinations as the two layers move in different rhythmic patterns. The work also represents an innovative collaboration between the photographic practitioner and a sound composer, Duncan King-Smith, who designed sound for the animation based on concepts of trance, repetition and abstraction. As part of the Art ConneXions program, the work travelled to numerous international venues including: Space 217 Singapore, RMIT Gallery Melbourne, National Museum Jakarta, Vietnam Fine Arts Museum Hanoi, and ifa (Institut fur Auslandsbeziehungen) Gallery in both Stuttgart and Berlin.
Abstract:
The design of artificial intelligence in computer games is an important component of a player's game play experience. As games become more life-like and interactive, the need for more realistic game AI will increase. This is particularly the case for AI that simulates how human players act, behave and make decisions. The purpose of this research is to establish a model of player-like behavior that may be used to inform the design of artificial intelligence that more accurately mimics a player's decision-making process. The research uses a qualitative analysis of player opinions and reactions while playing a first-person shooter video game, together with recordings of their in-game actions, speech and facial characteristics. The initial studies provide player data that has been used to design a model of how a player behaves.
Abstract:
Purpose – The purpose of this paper is to examine the use of short video tutorials in a post-graduate accounting subject, as a means of helping students develop and enhance independent learning skills. Design/methodology/approach – In total, five short (approximately five to 10 minutes) video tutorials were introduced in an effort to shift the reliance for learning from the lecturer to the student. Data on students’ usage of online video tutorials, and comments by students in university questionnaires were collated over three semesters from 2008 to 2009. Interviews with students were then conducted in late 2009 to more comprehensively evaluate the use and perceived benefits of video tutorials. Findings – Findings reveal preliminary but positive outcomes in terms of both more efficient and effective teaching and learning. Research limitations/implications – The shift towards more independent learning through the use of video tutorials has positive implications for educators, employers, and professional accounting bodies; each of whom has identified the need for this skill in accounting graduates. Practical implications – The use of video tutorials has the potential for more rewarding teaching and more effective learning. Originality/value – This study is one of the first to examine the use and benefits of video tutorials as a means of developing independent learning skills in accountancy students – addressing a key concern within the profession.
Abstract:
Since users became the focus of product and service design in the last decade, the term User eXperience (UX) has been used frequently in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user’s interaction with a product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Given the significance of UX to the success of mobile video (Jordan, 2002), many researchers have focused on this area, examining users’ expectations, motivations, requirements, and usage contexts. As a result, many influencing factors have been identified (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for the specific mobile video service is lacking to structure this large number of factors. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user’s needs and desires when using the service, emphasizing the user’s overall acceptance of the service. Many QoE metrics can estimate the user-perceived quality or acceptability of mobile video, but they may not be accurate enough for overall UX prediction, given the complexity of UX. Only a few QoE frameworks address further aspects of UX for mobile multimedia applications, and these still need to be transformed into practical measures. The challenge of optimizing UX remains adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) as well as meeting complicated user requirements (e.g., usage purposes and personal preferences).
In this chapter, we investigate important existing UX frameworks, compare their similarities and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored through a comprehensive literature review. The proposed framework may benefit the user-centred design of mobile video, by taking full account of UX influences, and the improvement of mobile video service quality, by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a great deal of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on various aspects of the UX of mobile video. In the conclusion, we suggest some open issues for future study.