865 results for digital model
Training the public to collect oral histories of our community: the OHAA Queensland Chapter’s model
Abstract:
In a digital age, the skills required to undertake an oral history project have changed dramatically. For community groups, this shift can be new and exciting, but it can also provoke anxiety when there is a gap in the skill set. Addressing this gap is one of the main activities of the Oral History Association of Australia, Queensland (OHAA Qld). This paper reports on the OHAA Qld chapter’s oral history workshop program, which was radically altered in 2011.
Abstract:
The authors present a Cause-Effect fault diagnosis model, which utilises the Root Cause Analysis approach and takes into account the technical features of a digital substation. Dempster-Shafer evidence theory is used to integrate different types of fault information in the diagnosis model, so as to implement a hierarchical, systematic and comprehensive diagnosis based on the logical relationships between parent and child nodes (such as transformer, circuit breaker and transmission line) and between root and child causes. A real fault scenario is investigated in the case study to demonstrate the developed approach in diagnosing malfunctions of protective relays and/or circuit breakers, missed or false alarms, and other faults commonly encountered at a modern digital substation.
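The fusion step at the heart of this kind of model is Dempster's rule of combination. The following is a minimal sketch in Python; the frame of discernment, the two evidence sources and all mass values are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions (dicts frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalise by the non-conflicting mass
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two hypothetical evidence sources about a substation fault:
# T = transformer, B = circuit breaker, L = transmission line
protection_evidence = {frozenset("T"): 0.6, frozenset("TB"): 0.3, frozenset("TBL"): 0.1}
alarm_evidence = {frozenset("T"): 0.5, frozenset("L"): 0.2, frozenset("TBL"): 0.3}

for hypothesis, mass in sorted(combine(protection_evidence, alarm_evidence).items(),
                               key=lambda kv: -kv[1]):
    print("".join(sorted(hypothesis)), round(mass, 3))
```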
Abstract:
Building Web 2.0 sites does not necessarily ensure the success of the site. We aim to better understand what improves the success of a site by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction, enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics, using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users provides self-organisation and self-contextualisation of content. One model describing coordinated environments is called stigmergy, a term originally describing coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, specifically when considering that user freedom of choice is restricted by the provided web site functionality. This will aid us in building better collaborative Web sites and improving collaborative processes.
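To illustrate the stigmergy mechanism referred to above, here is a minimal sketch in which user interactions deposit "pheromone" on content items and the deposits evaporate over time, so the ranking self-organises around current collective attention. The decay and deposit constants, and the simulated traffic, are invented for the example.

```python
DECAY = 0.5    # fraction of pheromone retained per time step
DEPOSIT = 1.0  # pheromone added per user interaction

pheromone = {}  # content id -> accumulated pheromone

def record_interaction(item_id):
    pheromone[item_id] = pheromone.get(item_id, 0.0) + DEPOSIT

def tick():
    """Advance one time step: all trails evaporate a little."""
    for item_id in pheromone:
        pheromone[item_id] *= DECAY

def ranking():
    return sorted(pheromone, key=pheromone.get, reverse=True)

# Simulated traffic: item "a" was popular earlier; "b" is popular now.
for _ in range(5):
    record_interaction("a")
tick(); tick(); tick()
for _ in range(3):
    record_interaction("b")
print(ranking())  # "b" outranks "a" once the older trail has evaporated
```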
Abstract:
The recognition that Web 2.0 applications and social media sites will strengthen and improve interaction between governments and citizens has resulted in a global push into new e-democracy or Government 2.0 spaces. These typically follow government-to-citizen (g2c) or citizen-to-citizen (c2c) models, but both these approaches are problematic: g2c is often concerned more with service delivery to citizens as clients, or exists to make a show of ‘listening to the public’ rather than to genuinely source citizen ideas for government policy, while c2c often takes place without direct government participation and therefore cannot ensure that the outcomes of citizen deliberations are accepted into the government policy-making process. Building on recent examples of Australian Government 2.0 initiatives, we suggest a new approach based on government support for citizen-to-citizen engagement, or g4c2c, as a workable compromise, and suggest that public service broadcasters should play a key role in facilitating this model of citizen engagement.
Abstract:
The digital humanities are growing rapidly in response to a rise in Internet use. What humanists mostly work on, and what forms much of the content of our growing repositories, are digital surrogates of originally analog artefacts. But is the data model upon which many of those surrogates are based – embedded markup – adequate for the task? Or does it in fact inhibit reusability and flexibility? To enhance the interoperability of resources and tools, some changes to the standard markup model are needed. Markup could be removed from the text and stored in standoff form. The versions of which many cultural heritage texts are composed could also be represented externally and computed automatically. These changes would not disrupt existing data representations, which could be imported without significant data loss. They would also enhance automation and ease the increasing burden on the modern digital humanist.
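To make the standoff proposal concrete, here is a minimal sketch: the text is stored with no embedded tags, and the markup lives in a separate list of (start, end, tag) character ranges that can be re-embedded for display. The example text, tag names and rendering routine are illustrative.

```python
text = "Call me Ishmael. Some years ago..."

# Standoff annotations: character offsets into the plain text.
annotations = [
    (0, 16, "s"),         # first sentence
    (8, 15, "persName"),  # "Ishmael"
]

def render(text, annotations):
    """Re-embed the standoff ranges as inline tags, for display only."""
    events = []
    for start, end, tag in annotations:
        events.append((start, 0, f"<{tag}>"))
        events.append((end, 1, f"</{tag}>"))
    out, pos = [], 0
    for offset, _, marker in sorted(events):
        out.append(text[pos:offset])
        out.append(marker)
        pos = offset
    out.append(text[pos:])
    return "".join(out)

print(render(text, annotations))
# <s>Call me <persName>Ishmael</persName>.</s> Some years ago...
# Overlapping or competing hierarchies can coexist as standoff ranges,
# which embedded XML cannot express without workarounds.
```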
Abstract:
This paper develops a framework for classifying term dependencies in query expansion with respect to the role terms play in structural linguistic associations. The framework is used to classify and compare the query expansion terms produced by the unigram and positional relevance models. As the unigram relevance model does not explicitly model term dependencies in its estimation process it is often thought to ignore dependencies that exist between words in natural language. The framework presented in this paper is underpinned by two types of linguistic association, namely syntagmatic and paradigmatic associations. It was found that syntagmatic associations were a more prevalent form of linguistic association used in query expansion. Paradoxically, it was the unigram model that exhibited this association more than the positional relevance model. This surprising finding has two potential implications for information retrieval models: (1) if linguistic associations underpin query expansion, then a probabilistic term dependence assumption based on position is inadequate for capturing them; (2) the unigram relevance model captures more term dependency information than its underlying theoretical model suggests, so its normative position as a baseline that ignores term dependencies should perhaps be reviewed.
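For readers unfamiliar with the unigram relevance model discussed above, the following sketch shows expansion-term weighting in the spirit of RM1: candidate terms are scored by their probability under each feedback document's language model, weighted by that document's query likelihood. The toy feedback documents and query are invented, and the positional variant is not reproduced.

```python
from collections import Counter

feedback_docs = [
    "digital markup model standoff markup",
    "markup model for digital texts",
    "query expansion with relevance models",
]
query = ["markup", "model"]

def doc_lm(doc):
    """Maximum-likelihood unigram language model of a document."""
    counts = Counter(doc.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

doc_models = [doc_lm(d) for d in feedback_docs]

# P(q|d): product of query-term probabilities (zero if a term is absent).
query_likelihoods = []
for lm in doc_models:
    p = 1.0
    for q in query:
        p *= lm.get(q, 0.0)
    query_likelihoods.append(p)

# RM1-style estimate: P(w|R) proportional to sum over docs of P(w|d) * P(q|d)
relevance_model = Counter()
for lm, ql in zip(doc_models, query_likelihoods):
    for w, pw in lm.items():
        relevance_model[w] += pw * ql

for w, score in relevance_model.most_common(5):
    print(w, round(score, 4))
```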
Abstract:
This paper develops and evaluates an enhanced corpus based approach for semantic processing. Corpus based models that build representations of words directly from text do not require pre-existing linguistic knowledge, and have demonstrated psychologically relevant performance on a number of cognitive tasks. However, they have been criticised in the past for not incorporating sufficient structural information. Using ideas underpinning recent attempts to overcome this weakness, we develop an enhanced tensor encoding model to build representations of word meaning for semantic processing. Our enhanced model demonstrates superior performance when compared to a robust baseline model on a number of semantic processing tasks.
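The sketch below illustrates one form of tensor encoding: a word's left and right neighbours are bound via an outer product, so the resulting memory retains word-order structure that a bag-of-words count would lose. The dimensionality, toy corpus and probing scheme are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64
vocab = ["the", "dog", "chased", "cat"]
# Random environment vector per word, scaled so the expected norm is 1.
env = {w: rng.standard_normal(DIM) / np.sqrt(DIM) for w in vocab}

corpus = [["the", "dog", "chased", "cat"]]

# One memory tensor per word: sum of outer products of (left, right) context.
memory = {w: np.zeros((DIM, DIM)) for w in vocab}
for sentence in corpus:
    for i, w in enumerate(sentence):
        if 0 < i < len(sentence) - 1:
            left, right = sentence[i - 1], sentence[i + 1]
            memory[w] += np.outer(env[left], env[right])

# Because the binding is an outer product, order matters: probing "chased"
# with "dog" on the left retrieves a vector close to "cat".
probe = env["dog"] @ memory["chased"]
sims = {w: float(probe @ env[w]) for w in vocab}
print(max(sims, key=sims.get))  # expected: cat
```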
Abstract:
A strongly progressive surveying and mapping industry depends on a shared understanding of the industry as it exists, some shared vision or imagination of what the industry might become, and some shared action plan capable of bringing about a realisation of that vision. The emphasis on sharing implies a need for consensus reached through widespread discussion and mutual understanding. Unless this occurs, concerted action is unlikely. A more likely outcome is that industry representatives will negate each other's efforts in their separate bids for progress. The process of bringing about consensual viewpoints is essentially one of establishing an industry identity. Establishing the industry's identity and purpose is a prerequisite for rational development of the industry's education and training, its promotion and marketing, and operational research that can deal with industry potential and efficiency. This paper interprets evolutionary developments occurring within Queensland's surveying and mapping industry within a framework that sets out logical requirements for a viable industry.
Abstract:
This presentation deals with the transformations that have occurred in news journalism worldwide in the early 21st century. I will argue that these have been the most significant changes to the profession in 100 years, and that the challenges facing the news media industry in responding to them are substantial, as are those facing journalism education. I will develop this argument in relation to the crisis of the newspaper business model, and why social media, blogging and citizen journalism have not filled the gap left by the withdrawal of resources from traditional journalism. I will also draw upon Wikileaks as a case study in debates about computational and data-driven journalism, and whether large-scale "leaks" of electronic documents may be the future of investigative journalism.
Abstract:
This work-in-progress paper presents an ensemble-based model for detecting and mitigating Distributed Denial-of-Service (DDoS) attacks, and its partial implementation. The model utilises network traffic analysis and MIB (Management Information Base) server load analysis features for detecting a wide range of network and application layer DDoS attacks and distinguishing them from Flash Events. The proposed model will be evaluated against realistic synthetic network traffic generated using a software-based traffic generator that we have developed as part of this research. In this paper, we summarise our previous work, highlight the current work being undertaken along with preliminary results obtained and outline the future directions of our work.
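A minimal sketch of the ensemble idea follows: one detector watches traffic-level features, another watches server-load (MIB-style) counters, and an attack is flagged only when both agree, which is what helps separate DDoS floods from Flash Events. The features, thresholds and entropy band are illustrative assumptions, not the paper's design.

```python
from dataclasses import dataclass

@dataclass
class Window:
    pkts_per_sec: float    # network traffic rate
    src_ip_entropy: float  # diversity of source addresses
    cpu_load: float        # server-side load from MIB counters

def traffic_detector(w: Window) -> bool:
    # Floods tend to pair a high rate with source entropy outside the
    # "organic" band that flash crowds usually occupy.
    return w.pkts_per_sec > 50_000 and not (2.0 < w.src_ip_entropy < 4.0)

def load_detector(w: Window) -> bool:
    return w.cpu_load > 0.9

def ensemble(w: Window) -> str:
    votes = [traffic_detector(w), load_detector(w)]
    if all(votes):
        return "ddos"
    if any(votes):
        return "suspicious"  # one detector fired: keep monitoring
    return "normal"

print(ensemble(Window(80_000, 5.2, 0.97)))  # ddos
print(ensemble(Window(70_000, 3.1, 0.95)))  # flash-event-like: suspicious
```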
Abstract:
The promotion of resilience (the capacity of an individual or community to bounce back and recover from adversity) has become an important area of public health. In recent years it has expanded into the digital domain, and many online applications have been developed to promote children's resilience. In this study, it is argued that the majority of existing applications are limited because they take a didactic approach and conceive of interaction as providing navigational choices. Because they simply provide information about resilience or replicate offline scenario-based strategies, the understanding of resilience they provide is confined to a few predetermined factors. In this study I propose a new, experiential approach to promoting resilience digitally. I define resilience as an emergent, situated and context-specific phenomenon. Using a Participatory Design model in combination with a salutogenic (strength-based) health methodology, this project has involved approximately 50 children as co-designers and co-researchers over two years. The children have contributed to the design of a new set of interactive resilience tools, which facilitate resilience promotion through dialogic and experiential learning. The major outcomes of this study include a new methodology for developing digital resilience tools, a new set of tools developed and evaluated in collaboration with children, and a set of design principles to guide future development. Beyond these initial and tangible outcomes, this study has also established that the benefits of introducing Participatory Design into a health-promoting model rest primarily in changing the role of children from "users" of technology and education to co-designers, where they assume a leadership role both in designing the tools and in directing their resilience learning.
Abstract:
The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Outliers with insufficient likelihood under the model are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the Hidden Markov Model depends not only on the previous state in the temporal direction, but also on the previous states of the adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information respectively. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
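The sketch below shows only the novelty-detection step, with a plain one-dimensional HMM standing in for the Semi-2D model: an HMM trained on normal behaviour scores incoming feature sequences, and sequences whose log-likelihood falls below a threshold are flagged as abnormal. The spatial coupling described in the abstract is not reproduced, and all probability tables are invented for the example.

```python
import numpy as np

# Two hidden states ("calm", "moving") fitted, in this toy, to normal
# footage only, so neither state emits the "run" symbol with high probability.
start = np.array([0.7, 0.3])
trans = np.array([[0.85, 0.15],
                  [0.25, 0.75]])
# Observation symbols: 0 = still, 1 = walk, 2 = run
emit = np.array([[0.85, 0.13, 0.02],
                 [0.15, 0.75, 0.10]])

def log_likelihood(obs):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s  # rescale to avoid numerical underflow
    return log_p

THRESHOLD = -8.0  # illustrative; would be tuned on held-out normal footage
normal = [0, 0, 1, 1, 0]    # typical pedestrian pattern
abnormal = [2, 2, 2, 2, 2]  # sustained running through the scene
for seq in (normal, abnormal):
    ll = log_likelihood(seq)
    print(seq, round(float(ll), 2), "abnormal" if ll < THRESHOLD else "normal")
```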
Abstract:
Flexible information exchange is critical to successful design-analysis integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this article we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. We then discuss how a shared mapping process that is flexible and user-friendly supports non-programmers in creating these custom connections. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We then discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
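As one illustration of the shared mapping idea, the sketch below translates a design tool's output record into an analysis tool's input via a declarative field mapping rather than a hard-coded converter, which is the kind of connection a non-programmer could edit. The field names, units and transforms are illustrative assumptions.

```python
design_output = {"wallArea_m2": 42.0, "glazingRatio": 0.35, "orientation": "N"}

# User-editable mapping: target field -> (source field, transform)
mapping = {
    "surface_area": ("wallArea_m2", lambda v: v),
    "window_fraction": ("glazingRatio", lambda v: v),
    "azimuth_deg": ("orientation", lambda v: {"N": 0, "E": 90, "S": 180, "W": 270}[v]),
}

def translate(record, mapping):
    """Apply each (source, transform) pair to build the target record."""
    return {tgt: fn(record[src]) for tgt, (src, fn) in mapping.items()}

print(translate(design_output, mapping))
# {'surface_area': 42.0, 'window_fraction': 0.35, 'azimuth_deg': 0}
```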
Abstract:
Flexible information exchange is critical to successful design integration, but current top-down, standards-based and model-oriented strategies impose restrictions that are contradictory to this flexibility. In this paper we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
Abstract:
This paper deals with the transformations that have occurred in news journalism worldwide in the early 21st century. I argue that they have been the most significant changes to the profession in 100 years, and that the challenges facing the news media industry in responding to them are substantial, as are those facing journalism education. This argument is developed in relation to the crisis of the newspaper business model, and why social media, blogging and citizen journalism have not filled the gap left by the withdrawal of resources from traditional journalism. It also draws upon Wikileaks as a case study in debates about computational and data-driven journalism, and whether large-scale "leaks" of electronic documents may be the future of investigative journalism.