143 results for digital terrain analysis
Abstract:
The authors present a Cause-Effect fault diagnosis model, which utilises the Root Cause Analysis approach and takes into account the technical features of a digital substation. Dempster-Shafer evidence theory is used to integrate different types of fault information in the diagnosis model so as to implement a hierarchical, systematic and comprehensive diagnosis based on the logical relationships between parent and child nodes such as transformer/circuit-breaker/transmission-line, and between root and child causes. A real fault scenario is investigated in the case study to demonstrate the developed approach in diagnosing malfunctions of protective relays and/or circuit breakers, missed or false alarms, and other commonly encountered faults at a modern digital substation.
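The evidence-fusion step described above rests on Dempster's rule of combination. The sketch below is a minimal, generic implementation of that rule, not the paper's actual diagnosis model; the two fault hypotheses and all mass values are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence cannot be combined")
    # Normalise by 1 - K, where K is the total conflicting mass
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Hypothetical evidence about two competing fault causes
relay, breaker = "relay_fault", "breaker_fault"
theta = frozenset({relay, breaker})  # frame of discernment
m_protection = {frozenset({relay}): 0.7, theta: 0.3}
m_alarm_log = {frozenset({relay}): 0.5, frozenset({breaker}): 0.2, theta: 0.3}
fused = dempster_combine(m_protection, m_alarm_log)
```

Combining the two sources concentrates belief on the relay fault while retaining some mass on the uncommitted frame, which is the behaviour a hierarchical diagnosis can exploit when ranking root causes.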
Abstract:
In 2010, the State Library of Queensland (SLQ) donated their out-of-copyright Queensland images to Wikimedia Commons. One direct effect of publishing the collections at Wikimedia Commons is the ability of general audiences to participate and help the library in processing the images in the collection. This paper discusses a project that explored user participation in the categorisation of the State Library of Queensland digital image collections. The outcomes of this project can be used to gain a better understanding of user participation that leads to improved access to library digital collections. Two data collection techniques were used: document analysis and interviews. Document analysis was performed on the Wikimedia Commons monthly reports, while interviews served as the main data collection technique. The data collected from document analysis helped the researchers devise appropriate questions for the interviews. The interviews were undertaken with participants divided into two groups: SLQ staff members and Wikimedians (users who participate in Wikimedia). The two sets of data were analysed independently and then compared, which allowed the researchers to understand the differences between the librarians’ and the users’ experiences of categorisation. This paper discusses the preliminary findings that emerged from each participant group. The research provides preliminary information about the extent of user participation in the categorisation of SLQ collections in Wikimedia Commons, which SLQ and other interested libraries can use when describing their digital content through categorisation to improve user access to their collections in the future.
Abstract:
The automotive industry has been the focus of digital human modeling (DHM) research and application for many years. In the highly competitive marketplace for personal transportation, the desire to improve the customer’s experience has driven extensive research in both the physical and cognitive interaction between the vehicle and its occupants. Human models provide vehicle designers with tools to view and analyze product interactions before the first prototypes are built, potentially improving the design while reducing cost and development time. The focus of DHM research and applications began with prediction and representation of static postures for purposes of driver workstation layout, including assessments of seat adjustment ranges and exterior vision. Now DHMs are used for seat design and assessment of driver reach and ingress/egress. DHMs and related simulation tools are expanding into the cognitive domain, with computational models of perception and motion, and into the dynamic domain with models of physical responses to ride and vibration. Moreover, DHMs are now widely used to analyze the ergonomics of vehicle assembly tasks. In this case, the analysis aims to determine whether workers can be expected to complete the tasks safely and with good quality. This preface provides a review of the literature to provide context for the nine new papers presented in this special issue.
Abstract:
Process bus networks are the next stage in the evolution of substation design, bringing digital technology to the high voltage switchyard. Benefits of process buses include facilitating the use of Non-Conventional Instrument Transformers, improved disturbance recording and phasor measurement, and the removal of costly, and potentially hazardous, copper cabling from substation switchyards and control rooms. This paper examines the role a process bus plays in an IEC 61850 based Substation Automation System. Measurements taken from a process bus substation are used to develop an understanding of the network characteristics of "whole of substation" process buses. The concept of "coherent transmission" is presented and its impact on Ethernet switches is examined. Experiments based on substation observations are used to investigate in detail the behavior of Ethernet switches with sampled value traffic. Test methods that can be used to assess the adequacy of a network are proposed, and examples of the application and interpretation of these tests are provided. Once sampled value frames are queued by an Ethernet switch, the additional delay incurred by subsequent switches is minimal, and this allows switches to be used in switchyards to further reduce communications cabling without significantly impacting operation. The performance and reliability of a process bus network operating with close to the theoretical maximum number of digital sampling units (merging units or electronic instrument transformers) was investigated with networking equipment from several vendors, and has been demonstrated to be acceptable.
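The observation that subsequent switches add little delay once sampled value frames have been serialised can be illustrated with a simple store-and-forward model. This is a back-of-envelope sketch, not the paper's measurement method; the frame size, link rate and per-switch latency below are assumed values.

```python
def sv_network_delay(n_units, frame_bytes, link_rate_bps, n_switches,
                     switch_latency_s=5e-6):
    """Worst-case delay of the last sampled-value frame in a coherent burst.

    The first switch must queue and serialise all n_units frames arriving
    at once; each subsequent switch then adds only one frame's
    store-and-forward time plus a fixed switching latency.
    """
    t_frame = frame_bytes * 8 / link_rate_bps            # serialisation time per frame
    first_switch = n_units * t_frame + switch_latency_s  # queuing dominates here
    downstream = (n_switches - 1) * (t_frame + switch_latency_s)
    return first_switch + downstream

# 8 merging units sending ~126-byte frames (assumed size) over 100 Mb/s links
d1 = sv_network_delay(8, 126, 100e6, n_switches=1)
d3 = sv_network_delay(8, 126, 100e6, n_switches=3)
```

Under this model each added switch contributes only one frame time plus its switching latency, far less than the initial queuing delay at the first switch, which is consistent with the paper's conclusion that cascading switches in the switchyard is viable.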
Abstract:
An ambitious rendering of the digital future from a pioneer of media and cultural studies, and a wise and witty take on a changing field and our orientation to it. The book investigates the uses of multimedia by creative and productive citizen-consumers to provide new theories of communication that accommodate social media, participatory action, and user-creativity. It leads the way for new interdisciplinary engagement with systems thinking, complexity and evolutionary sciences, and the convergence of cultural and economic values. It analyzes the historical uses of multimedia from print, through broadcasting, to the internet, and combines conceptual innovation with historical erudition to present a high-level synthesis of ideas and detailed analysis of emergent forms and practices. An international focus and global reach provide a basis for students and researchers seeking broader perspectives.
Abstract:
This paper describes the use of property graphs for mapping data between AEC software tools, which are not linked by common data formats and/or other interoperability measures. The intention of introducing this in practice, education and research is to facilitate the use of diverse, non-integrated design and analysis applications by a variety of users who need to create customised digital workflows, including those who are not expert programmers. Data model types are examined by way of supporting the choice of directed, attributed, multi-relational graphs for such data transformation tasks. A brief exemplar design scenario is also presented to illustrate the concepts and methods proposed, and conclusions are drawn regarding the feasibility of this approach and directions for further research.
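A directed, attributed, multi-relational graph of the kind the paper selects can be sketched in a few lines. The `PropertyGraph` class, the node identifiers and the BIM-to-analysis mapping below are all hypothetical, chosen only to illustrate the data model, not the paper's implementation.

```python
class PropertyGraph:
    """Minimal directed, attributed, multi-relational graph."""

    def __init__(self):
        self.nodes = {}   # node id -> attribute dict
        self.edges = []   # (source, relation, target, attribute dict)

    def add_node(self, node_id, **attrs):
        self.nodes.setdefault(node_id, {}).update(attrs)

    def add_edge(self, source, relation, target, **attrs):
        self.edges.append((source, relation, target, attrs))

    def neighbours(self, node_id, relation=None):
        """Targets reachable from node_id, optionally filtered by relation type."""
        return [t for s, r, t, _ in self.edges
                if s == node_id and (relation is None or r == relation)]

# Hypothetical mapping: a wall element from a BIM tool linked to an analysis mesh
g = PropertyGraph()
g.add_node("wall-01", source_tool="BIM", height_m=3.0)
g.add_node("mesh-07", source_tool="structural-analysis")
g.add_edge("wall-01", "maps_to", "mesh-07", transform="tessellate")
```

Because both nodes and edges carry attribute dictionaries and edges are typed by relation, the same structure can express several simultaneous mappings between tools without a shared file format, which is the interoperability role the paper assigns to such graphs.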
Abstract:
Twitter is an important and influential social media platform, but much research into its uses remains centred around isolated cases – e.g. of events in political communication, crisis communication, or popular culture, often coordinated by shared hashtags (brief keywords, prefixed with the symbol ‘#’). In particular, a lack of standard metrics for comparing communicative patterns across cases prevents researchers from developing a more comprehensive perspective on the diverse, sometimes crucial roles which hashtags play in Twitter-based communication. We address this problem by outlining a catalogue of widely applicable, standardised metrics for analysing Twitter-based communication, with particular focus on hashtagged exchanges. We also point to potential uses for such metrics, presenting an indication of what broader comparisons of diverse cases can achieve.
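A standardised hashtag metric of the kind proposed could, for example, report activity volume, contributor diversity and retweet share for one hashtagged exchange. The function and the tweet schema below are illustrative assumptions, not the authors' published metric catalogue.

```python
from collections import Counter

def hashtag_metrics(tweets, tag):
    """Simple comparative metrics for one hashtagged exchange.

    tweets: list of dicts with 'user', 'text' and 'is_retweet' keys
    (an assumed schema, not Twitter's API format).
    """
    tagged = [t for t in tweets if tag.lower() in t["text"].lower()]
    users = Counter(t["user"] for t in tagged)
    n = len(tagged)
    return {
        "tweets": n,                                              # activity volume
        "unique_users": len(users),                               # contributor base
        "retweet_share": sum(t["is_retweet"] for t in tagged) / n if n else 0.0,
        "top_user_share": max(users.values()) / n if n else 0.0,  # concentration
    }

sample = [
    {"user": "a", "text": "Watching #qanda tonight", "is_retweet": False},
    {"user": "b", "text": "RT @a: Watching #qanda tonight", "is_retweet": True},
    {"user": "a", "text": "#qanda got heated", "is_retweet": False},
]
metrics = hashtag_metrics(sample, "#qanda")
```

Because every value is a count or a ratio, the same figures can be computed for any case study, which is what makes cross-case comparison of hashtag communication feasible.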
Abstract:
Before e-Technology’s effects on users can be accurately measured, those users must be fully engaged with the relevant systems and services; that is, they must be able to function as part of the digital economy. The paper refers to this ‘user functionality’ as t-Engagement. Not all users are t-Engaged, and in many instances achieving t-Engagement will require assistance from external sources. This paper identifies the current state of Australia’s regional digital economy readiness and highlights the role of Local Government Authorities (‘LGAs’) in enabling t-Engagement. The paper analyses responses to the 2012 BTA, NBN and Digital Economy Survey by LGAs and other regional organizations within Australia. The paper’s particular focus is on the level of use by LGAs of federal, state and other programs designed to enable t-Engagement. The analysis confirms the role of LGAs in enabling t-Engagement and in promoting Australia’s digital economy. The paper concludes by reinforcing the need to ensure ongoing meaningful federal and state support of regional initiatives, as well as identifying issues requiring specific attention.
Abstract:
In the last decade, smartphones have gained widespread usage. Since the advent of online application stores, hundreds of thousands of applications have become instantly available to millions of smartphone users. Within the Android ecosystem, application security is governed by digital signatures and a list of coarse-grained permissions. However, this mechanism is not fine-grained enough to give the user sufficient control over applications' activities; the result is abuse of highly sensitive private information, such as phone numbers, without users' notice. We show that there is a high frequency of privacy leaks even among widely popular applications. Together with the fact that the majority of users are not proficient in computer security, this presents a challenge to the engineers developing security solutions for the platform. Our contribution is twofold: first, we propose a service which is able to assess Android Market applications via static analysis and provide detailed but readable reports to the user. Second, we describe a means to mitigate security and privacy threats by automatically reverse-engineering and refactoring binary application packages according to the users' security preferences.
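The kind of readable report such a service might produce can be sketched as a check of requested permissions against a privacy-sensitive subset. The permission names are real Android permission identifiers, but the risk descriptions and the report format are illustrative assumptions, not the authors' implementation.

```python
# Real Android permission identifiers; the plain-language risk descriptions
# below are illustrative assumptions for the report format.
SENSITIVE = {
    "android.permission.READ_CONTACTS": "can read your address book",
    "android.permission.READ_PHONE_STATE": "can read your phone number and device IDs",
    "android.permission.ACCESS_FINE_LOCATION": "can track your precise location",
}

def permission_report(requested):
    """Return readable findings for the sensitive permissions an app requests."""
    findings = [desc for perm, desc in SENSITIVE.items() if perm in requested]
    return findings or ["No sensitive permissions requested"]

report = permission_report([
    "android.permission.INTERNET",
    "android.permission.READ_PHONE_STATE",
])
```

A real static analysis would go further, tracing whether the permitted data actually flows off the device, but even this coarse report is more informative to a non-expert than the raw permission list.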
Abstract:
This paper will compare and evaluate the effectiveness of commercial media lobbying and advocacy against public service media in two countries, the United Kingdom and Australia. The paper will focus empirically on the commercial media coverage of public service media issues in these countries (relating to the BBC and ABC respectively) over the period since the election of the Conservative-led Coalition in Britain in June 2010, and the election of the Gillard government in Australia in August 2010. Reference will be made to preceding periods as relevant to an understanding of the current environment. In both countries the main commercial media rival to public service media is News Corp and its associated organisations: News Ltd and Sky News in Australia, and News International and BSkyB in the UK. Through analysis of print and online news and commentary content, the paper will examine how News Corp outlets have reported and commented on the activities and plans of public service media as the latter have developed and extended their presence on digital TV and online platforms. It will also consider the responses of the ABC and BBC to these interventions. It will consider, thirdly, the responses of Australian and British governments to these debates, and the policy outcomes. This section of the paper will seek to evaluate the trajectory of the policy-public-private dynamic in recent years, and to draw conclusions as to the future direction of policy. Particular attention will be devoted to recent key moments in this unfolding dialogue.
In Britain, these key moments include the debates around the efforts of News Corp to take over 100% of BSkyB, both before and after the breaking of the phone-hacking scandal in July 2011; in Australia, the debate around the National Broadband Network and the competitive tender process for ABC World, that country’s public service transnational broadcaster; and other moments where rivalry between News Corp companies and public service media became mainstream news stories provoking wider public debate. The paper will conclude with recommendations as to how public service media organisations might engage constructively with commercial organisations in the future, including News Corp, taking into account emerging technological and financial challenges to traditional rationales for public service provision.
Abstract:
Organizations increasingly make use of social media in order to compete for customer awareness and improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigation. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online data of social media in order to reconstruct digital social signals; matching the signals with consumers’ life events; and configuring individualized goods and service offerings tailored to the individual needs of customers. We further contribute by tying together loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
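The four sequential DSSP activities can be sketched as a simple pipeline. The keyword-based decoding and the offer catalogue below are invented toy examples; the abstract defines the activities conceptually, not as code.

```python
def sense(posts):
    """Sensing: collect raw signals emitted on social media (plain strings here)."""
    return [p.lower() for p in posts]

def decode(raw_posts):
    """Decoding: reconstruct digital social signals, here via naive keyword spotting."""
    keywords = {"moving": "relocation", "engaged": "engagement", "baby": "new_child"}
    return [event for post in raw_posts
            for kw, event in keywords.items() if kw in post]

def match(signals, catalogue):
    """Matching: map detected life events to relevant offerings."""
    return [catalogue[s] for s in signals if s in catalogue]

def configure(offerings):
    """Configuring: individualise the matched offerings."""
    return [f"Personalised offer: {o}" for o in offerings]

catalogue = {"relocation": "moving insurance", "new_child": "family savings plan"}
offers = configure(match(decode(sense(["We are moving to Berlin!"])), catalogue))
```

The point of the sketch is the strict sequencing: each activity consumes only the output of the previous one, which is what makes DSSP amenable to study as four separable research problems.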
Abstract:
High-speed broadband internet access is widely recognised as a catalyst to social and economic development. However, the provision of broadband Internet services with the existing solutions to a rural population, scattered over an extensive geographical area, remains both an economic and technical challenge. As a feasible solution, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) proposed a highly spectrally efficient, innovative and cost-effective fixed wireless broadband access technology, which uses analogue TV frequency spectrum and Multi-User MIMO (MU-MIMO) technology with Orthogonal Frequency Division Multiplexing (OFDM). MIMO systems have emerged as a promising solution for the increasing demand for higher data rates, better quality of service, and higher network capacity. However, the performance of MIMO systems can be significantly affected by different types of propagation environments (e.g., indoor, outdoor urban, or outdoor rural) and operating frequencies. For instance, the large spectral efficiencies associated with MIMO systems, which assume a rich scattering environment such as in urban areas, may not be attainable in all propagation environments, such as outdoor rural environments, due to lower scatterer densities. Since this is the first time a MU-MIMO-OFDM fixed broadband wireless access solution has been deployed in a rural environment, questions arise from both theoretical and practical standpoints; for example, what capacity gains are available for the proposed solution under realistic rural propagation conditions? Currently, no comprehensive channel measurement and capacity analysis results are available for MU-MIMO-OFDM fixed broadband wireless access systems which employ large-scale multiple antennas at the Access Point (AP) and analogue TV frequency spectrum in rural environments.
Moreover, according to the literature, no deterministic MU-MIMO channel models exist that define rural wireless channels by accounting for terrain effects. This thesis fills the aforementioned knowledge gaps with channel measurements, channel modelling and comprehensive capacity analysis for MU-MIMO-OFDM fixed wireless broadband access systems in rural environments. For the first time, channel measurements were conducted in a rural farmland near Smithton, Tasmania using CSIRO's broadband wireless access solution. A novel deterministic MU-MIMO-OFDM channel model, which can be used for accurate performance prediction of rural MU-MIMO channels with dominant Line-of-Sight (LoS) paths, was developed under this research. Results show that the proposed solution can achieve 43.7 bits/s/Hz at a Signal-to-Noise Ratio (SNR) of 20 dB in rural environments. Based on channel measurement results, this thesis verifies that the deterministic channel model accurately predicts channel capacity in rural environments with a Root Mean Square (RMS) error of 0.18 bits/s/Hz. Moreover, this study presents a comprehensive capacity analysis of rural MU-MIMO-OFDM channels using experimental, simulated and theoretical models. Based on the validated deterministic model, the effects of capacity variation with different user distribution angles (θ) around the AP were further investigated. For instance, when SNR = 20 dB, the capacity increases from 15.5 bits/s/Hz to 43.7 bits/s/Hz as θ increases from 10° to 360°. Strategies to mitigate these capacity degradation effects are also presented, employing a suitable user grouping method. Outcomes of this thesis have already been used by CSIRO scientists to determine optimum user distribution angles around the AP, and are of great significance for researchers and MU-MIMO-OFDM system developers seeking to understand the advantages and potential capacity gains of MU-MIMO systems in rural environments.
Also, results of this study are useful to further improve the performance of MU-MIMO-OFDM systems in rural environments. Ultimately, this knowledge contribution will be useful in delivering efficient, cost-effective high-speed wireless broadband systems that are tailor-made for rural environments, thus, improving the quality of life and economic prosperity of rural populations.
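The capacity figures quoted above come from the thesis's measured and modelled channels; the underlying quantity is the standard MIMO channel capacity, C = log2 det(I + (SNR/Nt) H H^H). The pure-Python sketch below evaluates it for toy 2x2 channels; the matrices are invented to contrast a rich-scattering case with a rank-deficient, LoS-like case of the kind rural environments can produce.

```python
import math

def mimo_capacity_bits_per_hz(H, snr):
    """C = log2 det(I + (snr/nt) H H^H) for a 2x2 complex channel matrix H."""
    nt = 2
    # Gram matrix G = H H^H
    g = [[sum(H[i][k] * H[j][k].conjugate() for k in range(nt))
          for j in range(nt)] for i in range(nt)]
    a = 1 + (snr / nt) * g[0][0]
    b = (snr / nt) * g[0][1]
    c = (snr / nt) * g[1][0]
    d = 1 + (snr / nt) * g[1][1]
    return math.log2(abs(a * d - b * c))  # det of the 2x2 matrix I + (snr/nt)G

snr = 100.0  # linear SNR, i.e. 20 dB
c_rich = mimo_capacity_bits_per_hz([[1 + 0j, 0j], [0j, 1 + 0j]], snr)          # orthogonal paths
c_los = mimo_capacity_bits_per_hz([[1 + 0j, 1 + 0j], [1 + 0j, 1 + 0j]], snr)   # rank-deficient
```

With these toy channels the orthogonal case yields a noticeably higher capacity than the rank-deficient one, mirroring the thesis's point that scattering-poor rural propagation can erode the spectral-efficiency gains MIMO promises in urban settings.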
Abstract:
The promise of ‘big data’ has generated a great deal of interest in the development of new approaches to research in the humanities and social sciences, as well as a range of important critical interventions which warn of an unquestioned rush to ‘big data’. Drawing on experiences gained in developing innovative ‘big data’ approaches to social media research, this paper examines some of the repercussions for the scholarly research and publication practices of those researchers who do pursue the path of ‘big data’-centric investigation in their work. As researchers import the tools and methods of highly quantitative, statistical analysis from the ‘hard’ sciences into computational, digital humanities research, must they also subscribe to the language and assumptions underlying such ‘scientificity’? If so, how does this affect the choices made in gathering, processing, analysing, and disseminating the outcomes of digital humanities research? In particular, is there a need to rethink the forms and formats of publishing scholarly work in order to enable the rigorous scrutiny and replicability of research outcomes?
Abstract:
Many aspects of China's academic publishing system differ from the systems found in the liberal market-based economies of the United States, Western Europe and Australia. A high level of government intervention in both the publishing industry and academia, and the challenges associated with attempting a transition from a centrally controlled towards a more market-based publishing industry, are two notable differences; however, as in other countries, academic communities and publishers are being transformed by digital technologies. This research explores the complex yet dynamic digital transformation of academic publishing in China, with a specific focus on the open and networked initiatives inspired by Web 2.0 and social media. The thesis draws on two case studies: Science Paper Online, a government-operated online preprint platform and open access mandate; and New Science, a social reference management website operated by a group of young PhD students. Its analysis of the innovations, business models, operating strategies, influences, and difficulties faced by these two initiatives highlights important characteristics and trends in digital publishing experiments in China. The central argument of this thesis is that the open and collaborative possibilities of Web 2.0-inspired initiatives are emerging outside the established journal and monograph publishing system in China, introducing innovative and somewhat disruptive approaches to the certification, communication and commercial exploitation of knowledge. Moreover, emerging publishing models are enabling and encouraging a new system of practising and communicating science in China, putting into practice some elements of the Open Science ethos. There is evidence of both disruptive change to old publishing structures and the adaptive modification of emergent replacements in Chinese practice.
As such, the transformation from traditional to digital and interactive modes of publishing involves both competition and convergence between new and old publishers, as well as dynamics of co-evolution involving new technologies, business models, social norms, and government reform agendas. One key concern driving this work is whether there are new opportunities and new models for academic publishing in the Web 2.0 age and social media environment which might allow the basic functions of communication and certification to be achieved more effectively. This thesis enriches existing knowledge of open and networked transformations of scholarly publishing by adding a Chinese story. Although the development of open and networked publishing platforms in China remains in its infancy, the lessons provided by this research are relevant to practitioners and stakeholders interested in understanding the transformative dynamics of networked technologies for publishing and in advocating open access in practice, not only in China but also internationally.
Abstract:
The majority of today's undergraduate students are 'digital natives'; a generation born into a world shaped by digital technologies. It is important to understand the significance of this when considering how to teach Digital Media to digital natives. This paper examines the analogies to literacy that recur in digital native debates. It argues that if the concept of digital literacy is to be useful, educators must attend to the multiple layers and proficiencies that comprise literacy. Rather than completely dispose of old teaching methods, updated pedagogical practices should integrate analysis and critique with exploratory and creative modes of learning.