956 results for Direct digital detector
Abstract:
Twitter is now well established as the world’s second most important social media platform, after Facebook. Its 140-character updates are designed for brief messaging, and its network structures are kept relatively flat and simple: messages from users are either public and visible to all (even to unregistered visitors using the Twitter website), or private and visible only to approved ‘followers’ of the sender; there are none of the more complex definitions of degrees of connection (family, friends, friends of friends) that are available in other social networks. Over time, Twitter users have developed simple but effective mechanisms for working around these limitations: ‘#hashtags’, which enable the manual or automatic collation of all tweets containing the same #hashtag, as well as allowing users to subscribe to content feeds that contain only those tweets which feature specific #hashtags; and ‘@replies’, which allow senders to direct public messages even to users whom they do not already follow. This paper documents a methodology for extracting public Twitter activity data around specific #hashtags, and for processing these data in order to analyse and visualize the @reply networks existing between participating users – both overall, as a static network, and over time, to highlight the dynamic structure of @reply conversations. Such visualizations enable us to highlight the shifting roles played by individual participants, as well as the response of the overall #hashtag community to new stimuli – such as the entry of new participants or the availability of new information. Over longer timeframes, it is also possible to identify different phases in the overall discussion, or the formation of distinct clusters of preferentially interacting participants.
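To make the kind of processing described above concrete, the sketch below (not taken from the paper itself) builds a directed, weighted @reply/@mention network from tweets that have already been collected for a single #hashtag. The tweet dictionary structure, the function name build_reply_network and the use of the networkx library are illustrative assumptions, not the authors’ actual toolchain.

```python
import re
from collections import Counter

import networkx as nx

MENTION_RE = re.compile(r"@(\w+)")


def build_reply_network(tweets):
    """Build a directed, weighted @reply/@mention network.

    `tweets` is assumed to be an iterable of dicts with a 'user' key
    (the sender's screen name) and a 'text' key; every @mention in the
    text adds weight to the edge sender -> mentioned user.
    """
    edge_counts = Counter()
    for tweet in tweets:
        sender = tweet["user"]
        for target in MENTION_RE.findall(tweet["text"]):
            if target.lower() != sender.lower():  # ignore self-mentions
                edge_counts[(sender, target)] += 1

    graph = nx.DiGraph()
    for (sender, target), weight in edge_counts.items():
        graph.add_edge(sender, target, weight=weight)
    return graph


# Hypothetical tweets, already filtered to a single #hashtag
sample = [
    {"user": "alice", "text": "@bob interesting point about #eresearch"},
    {"user": "bob", "text": "thanks @alice! see also @carol #eresearch"},
]
g = build_reply_network(sample)
print(g.number_of_nodes(), g.number_of_edges())  # 3 nodes, 3 edges
```

Snapshotting such a graph over successive time windows, rather than over the whole dataset at once, gives the dynamic view of @reply conversations that the paper describes.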
Abstract:
Learning a digital tool is often a hidden process. We tend to learn new tools in a bewildering range of ways. Formal, informal, structured, random, conscious, unconscious, individual and group strategies may all play a part, but they are often lost to us in the complex and demanding processes of learning. But when we reflect carefully on the experience, some patterns and surprising techniques emerge. This monograph presents the thinking of four students in MDN642, Digital Pedagogies, who have deliberately reflected on the mental processes at work as they learnt a digital technology of their choice.
Abstract:
These digital stories were produced during a commercial research project with SLQ and Flying Arts. The works build on the research of Klaebe and Burgess related to variable workshop scenarios and the institutional contexts of co-creative media. In this instance, the research focused on the distributed digital storytelling workshop model and the development of audiences for digital storytelling. The research team worked with regional artists whose work had been selected for inclusion in the Five Senses exhibition held at the State Library of Queensland to produce stories about their work; these works were in turn integrated into the physical exhibition space. The remoteness of the artists’ locations and the project timeline shaped how the stories were made, through a mix of individual meetings and remote correspondence (email and phone).
Abstract:
Historically, determining the country of origin of a published work presented few challenges, because works were generally published physically – whether in print or otherwise – in a distinct location or few locations. However, publishing opportunities presented by new technologies mean that we now live in a world of simultaneous publication – works that are first published online are published simultaneously to every country in the world in which there is Internet connectivity. While this is certainly advantageous for the dissemination and impact of information and creative works, it creates potential complications under the Berne Convention for the Protection of Literary and Artistic Works (“Berne Convention”), an international intellectual property agreement to which most countries in the world now subscribe. Under the Berne Convention’s national treatment provisions, rights accorded to foreign copyright works may not be subject to any formality, such as registration requirements (although member countries are free to impose formalities in relation to domestic copyright works). In Kernel Records Oy v. Timothy Mosley p/k/a Timbaland, et al., however, the Florida Southern District Court of the United States ruled that first publication of a work on the Internet via an Australian website constituted “simultaneous publication all over the world,” and therefore rendered the work a “United States work” under the definition in section 101 of the U.S. Copyright Act, subjecting the work to the registration formality under section 411. This ruling is in sharp contrast with an earlier decision delivered by the Delaware District Court in Håkan Moberg v. 33T LLC, et al., which arrived at the opposite conclusion. The conflicting rulings of the U.S. courts reveal the problems posed by new forms of publishing online and demonstrate a compelling need for further harmonization between the Berne Convention, domestic laws and the practical realities of digital publishing. In this article, we argue that even if a work first published online can be considered to be simultaneously published all over the world, it does not follow that any country can assert itself as the “country of origin” of the work for the purpose of imposing domestic copyright formalities. More specifically, we argue that the meaning of “United States work” under the U.S. Copyright Act should be interpreted in line with the presumption against extraterritorial application of domestic law, to limit its application to only those works with a real and substantial connection to the United States. There are gaps in the Berne Convention’s articulation of “country of origin” which provide scope for judicial interpretation, at a national level, of the most pragmatic way forward in reconciling the goals of the Berne Convention with the practical requirements of domestic law. We believe that the uncertainties arising under the Berne Convention created by new forms of online publishing can be resolved at a national level by the sensible application of principles of statutory interpretation by the courts. While at the international level we may need a clearer consensus on what amounts to “simultaneous publication” in the digital age, state practice may mean that we do not yet need to explore textual changes to the Berne Convention.
Abstract:
The majority of the world’s population now lives in cities (United Nations, 2008), resulting in an urban densification that requires people to live in closer proximity and to share urban infrastructure such as streets, public transport, and parks. However, “physical closeness does not mean social closeness” (Wellman, 2001, p. 234). Whereas it is common practice to greet and chat with people you cross paths with in smaller villages, urban life is largely anonymous and does not automatically come with a sense of community per se. Wellman (2001, p. 228) defines community “as networks of interpersonal ties that provide sociability, support, information, a sense of belonging and social identity.” While on the move or during leisure time, urban dwellers use their interactive information communication technology (ICT) devices to connect to their spatially distributed community while in an anonymous space. Putnam (1995) argues that available technology privatises and individualises the leisure time of urban dwellers. Furthermore, ICT is sometimes used to build a “cocoon” while in public to avoid direct contact with collocated people (Mainwaring et al., 2005; Bassoli et al., 2007; Crawford, 2008). Instead of using ICT devices to seclude oneself from the surrounding urban environment and the collocated people within it, such devices could also be utilised to engage urban dwellers more with the urban environment and the urban dwellers within it. Urban sociologists found that “what attracts people most, it would appear, is other people” (Whyte, 1980, p. 19) and “people and human activity are the greatest object of attention and interest” (Gehl, 1987, p. 31). On the other hand, sociologist Erving Goffman describes the concept of civil inattention: acknowledging strangers’ presence while in public but not interacting with them (Goffman, 1966). With this in mind, there appears to be a contradiction between how and why people use ICT in urban public places and how they behave towards and react to other collocated people in those places. At the same time, there is an opportunity to employ ICT to create and influence the experiences of people collocated in public urban places. The widespread use of location-aware mobile devices equipped with Internet access is creating networked localities, a digital layer of geo-coded information on top of the physical world (Gordon & de Souza e Silva, 2011). Foursquare.com is an example of a location-based social network (LBSN) that enables urban dwellers to virtually check in to places at which they are physically present in an urban space. Users compete over ‘mayorships’ of places with Foursquare friends as well as strangers and can share recommendations about the space. The research field of Urban Informatics is interested in these kinds of digital urban multimedia augmentations and in how such augmentations, mediated through technology, can create or influence the user experience (UX) of public urban places. “Urban informatics is the study, design, and practice of urban experiences across different urban contexts that are created by new opportunities of real-time, ubiquitous technology and the augmentation that mediates the physical and digital layers of people networks and urban infrastructures” (Foth et al., 2011, p. 4). One possibility to augment the urban space is to enable citizens to digitally interact with spaces and with urban dwellers collocated in the past, present, and future.
“Adding [a] digital layer to the existing physical and social layers could facilitate new forms of interaction that reshape urban life” (Kjeldskov & Paay, 2006, p. 60). This methodological chapter investigates how the design of UX through such digital place-based mobile multimedia augmentations can be guided and evaluated. First, we describe three different applications that aim to create and influence the urban UX through mobile mediated interactions. Based on a review of the literature, we describe how our integrated framework for designing and evaluating urban informatics experiences has been constructed. We conclude the chapter with a reflective discussion on the proposed framework.
Abstract:
Google, Facebook, Twitter, LinkedIn, etc. are some of the prominent large-scale digital service providers that are having a tremendous impact on societies, corporations and individuals. However, despite their rapid uptake and their obvious influence on the behavior of individuals and on the business models and networks of organizations, we still lack a deeper, theory-guided understanding of the related phenomenon. We use Teece’s notion of complementary assets and extend it towards ‘digital complementary assets’ (DCA) in an attempt to provide such a theory-guided understanding of these digital services. Building on Teece’s theory, we make three contributions. First, we offer a new conceptualization of digital complementary assets in the form of digital public goods and digital public assets. Second, we differentiate three models for how organizations can engage with such digital complementary assets. Third, we identify user base as a critical factor when considering appropriability.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability to display and manipulate information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content falls entirely on developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
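As a purely illustrative aside (not the connectionist, environment-independent approach developed in the thesis), one minimal way to quantify visual correctness is to score a rendered frame against a known-good reference capture of the same scene state and flag frames whose per-pixel error exceeds a tolerance. The function names and the 0.02 tolerance below are assumptions for the sketch only.

```python
import numpy as np


def frame_error(rendered: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute per-pixel colour error between two frames in [0, 1]."""
    if rendered.shape != reference.shape:
        raise ValueError("frames must share the same resolution")
    return float(np.mean(np.abs(rendered.astype(float) - reference.astype(float))))


def is_visually_consistent(rendered, reference, tolerance=0.02):
    """Accept a rendered frame when its error against the reference stays within tolerance."""
    return frame_error(rendered, reference) <= tolerance


# Hypothetical usage: two captures of the same scene state
reference = np.zeros((720, 1280, 3))
rendered = reference + 0.01          # small, acceptable shading difference
print(is_visually_consistent(rendered, reference))  # True under the 0.02 tolerance
```

A reference-based check like this only works when a trusted capture exists; the thesis’s contribution is precisely to remove that dependence by learning an environment-independent notion of consistency.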
Abstract:
In most digital image watermarking schemes, it has become common practice to address security in terms of robustness, which is basically a norm in cryptography. Such a consideration in the development and evaluation of a watermarking scheme may severely affect its performance and render the scheme ultimately unusable. This paper provides an explicit theoretical analysis of watermarking security and robustness, in order to establish the exact status of the problem in the literature. With the necessary hypotheses and analyses from a technical perspective, we demonstrate the fundamental nature of the problem. Finally, some necessary recommendations are made for a complete assessment of watermarking security and robustness.
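The paper itself is a theoretical analysis, but the distinction it draws can be illustrated with a toy robustness test: a simple additive spread-spectrum watermark is embedded and then checked after a noise “attack”. Surviving such signal-processing distortions is a robustness property and says nothing about security against an informed adversary. All names, the embedding strength and the noise level below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)


def embed(image, key, strength=2.0):
    """Add a pseudorandom +/-1 spread-spectrum pattern derived from `key`."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255), pattern


def detect(suspect, original, pattern):
    """Non-blind detection: correlate the residual with the embedded pattern."""
    residual = suspect.astype(float) - original.astype(float)
    return float(np.mean(residual * pattern))


host = rng.integers(0, 256, size=(256, 256)).astype(float)
marked, pattern = embed(host, key=1234)
attacked = marked + rng.normal(0, 5, size=host.shape)   # additive-noise "attack"

print(detect(attacked, host, pattern))   # close to the embedding strength: watermark survives
print(detect(host + rng.normal(0, 5, size=host.shape), host, pattern))  # close to 0: no watermark
```

A high detection score under this kind of distortion demonstrates robustness only; an attacker who can estimate the pattern (the security question) is an entirely different evaluation, which is the gap the paper examines.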
Abstract:
In 2009, QUT’s Office of Research and the Institute for Adult Learning Singapore funded a six-month pilot project that represented the first stage of a larger international comparative study. The study is the first of its kind to investigate to what extent and how digital content workers’ learning needs are being met by adult education and training in Australia and Singapore. The pilot project involved consolidating key theoretical literature, studies, policies, programs and statistical data relevant to the digital content industries in Australia and Singapore. This had not been done before, and represented new knowledge generation. Digital content workers include professionals within and beyond the creative industries as follows: Visual effects and animation (including virtual reality and 3D products); Interactive multimedia (e.g. websites, CD-ROMs) and software development; Computer and online games; and Digital film & TV production and film & TV post-production. In the last decade, the digital content industries have been recognised as an industry sector of strong and increasing significance. The project compared Australia and Singapore on aspects of the digital content industries’ labour market, skill requirements, human capital challenges, the role of adult education in building a workforce for the digital content industries, and innovation policies. The consolidated report generated from the project formed the basis of the proposal for an ARC Linkage Project application submitted in the May 2010 round.
Abstract:
Finite Element Modeling (FEM) has become a vital tool in the automotive design and development processes. FEM of the human body is a technique capable of estimating parameters that are difficult to measure in experimental studies, with the human body segments being modeled as complex and dynamic entities. Several studies have been dedicated to attaining close-to-real FEMs of the human body (Pankoke and Siefert 2007; Amann, Huschenbeth et al. 2009; ESI 2010). The aim of this paper is to identify and appraise the state-of-the-art models of the human body which incorporate detailed pelvis and/or lower extremity models. Six databases and search engines were used to obtain literature, and the search was limited to studies published in English since 2000. The initial search results identified 636 pelvis-related papers, 834 buttocks-related papers, 505 thigh-related papers, 927 femur-related papers, 2039 knee-related papers, 655 shank-related papers, 292 tibia-related papers, 110 fibula-related papers, 644 ankle-related papers, and 5660 foot-related papers. A refined search returned 100 pelvis-related papers, 45 buttocks-related papers, 65 thigh-related papers, 162 femur-related papers, 195 knee-related papers, 37 shank-related papers, 80 tibia-related papers, 30 fibula-related papers, 102 ankle-related papers, and 246 foot-related papers. The refined literature list was further restricted by appraisal against modified LOW appraisal criteria. Studies with unclear methodologies, with a focus on populations with pathology, or with sport-related dynamic motion modeling were excluded. The final literature list included fifteen models, and each was assessed against the percentile the model represents, the gender the model was based on, the human body segment/segments included in the model, the sample size used to develop the model, the source of geometric/anthropometric values used to develop the model, the posture the model represents, and the finite element solver used for the model. The results of this literature review provide an indication of bias in the available models towards 50th percentile male modeling, with a notable concentration on the pelvis, femur and buttocks segments.
Abstract:
Digital human modelling (DHM) has today matured from research into industrial application. In the automotive domain, DHM has become a commonly used tool in virtual prototyping and human-centred product design. While the current generation of DHM supports the ergonomic evaluation of new vehicle designs during early design stages of the product, by modelling anthropometry, posture and motion or by predicting discomfort, the future of DHM will be dominated by CAE methods, realistic 3D design, and musculoskeletal and soft tissue modelling down to the micro-scale of molecular activity within single muscle fibres. As a driving force for DHM development, the automotive industry has traditionally used human models in the manufacturing sector (production ergonomics, e.g. assembly) and the engineering sector (product ergonomics, e.g. safety, packaging). In product ergonomics applications, DHM share many common characteristics, creating a unique subset of DHM. These models are optimised for a seated posture, interface to a vehicle seat through standardised methods and provide linkages to vehicle controls. As a tool, they need to interface with other analytic instruments and integrate into complex CAD/CAE environments. Important aspects of current DHM research are functional analysis, model integration and task simulation. Digital (virtual, analytic) prototypes or digital mock-ups (DMU) provide expanded support for testing and verification and consider task-dependent performance and motion. Beyond rigid body mechanics, soft tissue modelling is evolving to become standard in future DHM. When addressing advanced issues beyond the physical domain of, for example, anthropometry and biomechanics, modelling of human behaviours and skills is also integrated into DHM. Latest developments include a more comprehensive approach through implementing perceptual, cognitive and performance models, representing human behaviour on a non-physiological level. Through the integration of algorithms from the artificial intelligence domain, a vision of the virtual human is emerging.
Abstract:
This paper draws on a larger study of the uses of Australian user-created content and online social networks to examine the relationships between professional journalists and highly engaged Australian users of political media within the wider media ecology, with a particular focus on Twitter. It uses an analysis of topic-based conversation networks around the #ausvotes hashtag on Twitter during the 2010 federal election to explore the key themes and issues addressed by this Twitter community during the campaign, and finds that Twitter users were largely commenting on the performance of mainstream media and politicians rather than engaging in direct political discussion. The often critical attitude of Twitter users towards the political establishment mirrors the approach of news and political bloggers to political actors nearly a decade earlier, but the increasing adoption of Twitter as a communication tool by politicians, journalists, and everyday users alike makes a repetition of the polarisation experienced at that time appear unlikely.
Abstract:
The participation of the community broadcasting sector in the development of digital radio provides a potentially valuable opportunity for non-market, end-user-driven experimentation in the development of these new services in Australia. However, this development path is constrained by various factors, some of which are specific to the community broadcasting sector and others that are generic to the broader media and communications policy, industrial and technological context. This paper filters recent developments in digital radio policy and implementation through the perspectives of community radio stakeholders, obtained through interviews, to describe and analyse these constraints. The early stage of digital community radio presented here is intended as a baseline for tracking the development of the sector as digital radio broadcasting develops. We also draw upon insights from scholarly debates about citizens’ media and participatory culture to identify and discuss two sets of opportunities for social benefit that are enabled by the inclusion of community radio in digital radio service development. The first arises from community broadcasting’s involvement in the propagation of the multi-literacies that drive new digital economies, not only through formal and informal multi- and trans-media training, but also in the ‘co-creative’ forms of collaborative and participatory media production that are fostered in the sector. The second arises from the fact that community radio is uniquely placed — indeed charged with the responsibility — to facilitate social participation in the design and operation of media institutions themselves, not just their service outputs.
Abstract:
Digital Disruption: Cinema Moves On-line helps to make sense of what has happened in the short but turbulent history of on-line moving image distribution. It provides a realistic assessment of both the genuine and the not-so-promising methods that have been tried in response to the disruptions that the move from ‘analogue dollars’ to ‘digital cents’ has provoked in the film industry. Paying close attention to how the Majors have dealt – often unsuccessfully – with the challenges on-line distribution poses, it also focuses closely on the innovations and practices that have taken place beyond the mainstream, showcasing important entrepreneurial innovations such as Mubi, Jaman, Withoutabox and IMDb. Written by leading academic commentators and experts close to the fluctuating fortunes of the industry, Digital Disruption: Cinema Moves On-line is an indispensable guide to the changes currently facing film and its audiences.
Abstract:
Language use has proven to be the most complex and complicating of all Internet features, yet people and institutions invest enormously in language and cross-language features because they are fundamental to the success of the Internet’s past, present and future. The thesis focuses on the development of the latter – features that facilitate and signify linking between or across languages – in both their historical and current contexts. In the theoretical analysis, the conceptual platform of inter-language linking is developed both to accommodate efforts towards a new social complexity model for the co-evolution of languages and language content, and to create an open analytical space for language and cross-language related features of the Internet and beyond. The practiced uses of inter-language linking have changed over the last decades. Before and during the first years of the WWW, mechanisms of inter-language linking were at best important elements used to create new institutional or content arrangements, but on a large scale they were insignificant. This has changed with the emergence of the WWW and its development into a web in which content in different languages co-evolves. The thesis traces the inter-language linking mechanisms that facilitated these dynamic changes by analysing what these linking mechanisms are, how their historical as well as current contexts can be understood, and what kinds of cultural-economic innovation they enable and impede. The study discusses this alongside four empirical cases of bilingual or multilingual media use, ranging from television and web services for the languages of smaller populations to large-scale, multilingual web ventures by the British Broadcasting Corporation, the Special Broadcasting Service Australia, Wikipedia and Google. To sum up, the thesis introduces the concepts of ‘inter-language linking’ and the ‘lateral web’ to model the social complexity and co-evolution of languages online. The resulting model reconsiders existing social complexity models in that it is the first that can explain the emergence of large-scale, networked co-evolution of languages and language content facilitated by the Internet and the WWW. Finally, the thesis argues that the Internet enables an open space for language and cross-language related features and investigates how far this process is facilitated by (1) amateurs and (2) human-algorithmic interaction cultures.