339 results for ELECTRIFIED INTERFACES
Abstract:
Video games have shown great potential as tools that both engage and motivate players to achieve tasks and build communities in fantasy worlds. We propose that the application of game elements to real world activities can aid in delivering contextual information in interesting ways and help young people to engage in everyday events. Our research will explore how we can unite utility and fun to enhance information delivery, encourage participation, build communities and engage users with utilitarian events situated in the real world. This research aims to identify key game elements that work effectively to engage young digital natives, and provide guidelines to influence the design of interactions and interfaces for event applications in the future. This research will primarily contribute to areas of user experience and pervasive gaming.
Abstract:
Though stadium style seating in large lecture theatres may suggest otherwise, effective teaching and learning is not a spectator sport. A challenge in creating effective learning environments in both physical and virtual spaces is to provide optimal opportunity for student engagement in active learning. Queensland University of Technology (QUT) has developed the Open Web Lecture (OWL), a new web-based student response application, which seamlessly integrates a virtual learning environment within the physical learning space. The result is a blended learning experience; a fluid collaboration between academic and students connected to OWL via the University’s Wi-Fi using their own laptop or mobile web device. QUT is currently piloting the OWL application to encourage student engagement. OWL offers opportunities for participants to: • Post comments and questions • Reply to comments • "Like" comments • Poll students and review data • Review archived sessions. Many of these features instinctively appeal to student users of social networking media, yet afford the academic control within the University network. Student privacy is respected through a system of preserving peer-to-peer anonymity, a functionality that seeks to address a traditional reluctance to speak up in large classes. The pilot is establishing OWL as an opportunity for engaging students in active learning by enabling • virtual learning in physical spaces for large group lectures, seminar groups, workshops and conferences • live collaborative technology connecting students and the academic via the wireless network using their own laptop or mobile device • a non-intimidating environment in which to ask questions • promotion of a sense of community • instant feedback • problem-based learning. The student and academic response to OWL has been overwhelmingly positive, crediting OWL as an easy-to-use application which creates effective learning opportunities through interactivity and immediate feedback. This poster and accompanying online presentation of the technology will demonstrate how OWL offers new possibilities for active learning in physical spaces by: • providing increased opportunity for student engagement • supporting a range of learners and learning activities • fostering blended learning experiences. The presentation will feature visual displays of the technology, its various interfaces and feedback, including clips from interviews with students and academics participating in the early stages of the pilot.
Abstract:
Purpose: Web search engines are frequently used by people to locate information on the Internet. However, not all queries have an informational goal. Instead of information, some people may be looking for specific web sites or may wish to conduct transactions with web services. This paper aims to focus on automatically classifying the different user intents behind web queries. Design/methodology/approach: For the research reported in this paper, 130,000 web search engine queries are categorized as informational, navigational, or transactional using a k-means clustering approach based on a variety of query traits. Findings: The research findings show that more than 75 percent of web queries (clustered into eight classifications) are informational in nature, with about 12 percent each for navigational and transactional. Results also show that web queries fall into eight clusters, six primarily informational, and one each of primarily transactional and navigational. Research limitations/implications: This study provides an important contribution to web search literature because it provides information about the goals of searchers and a method for automatically classifying the intents of the user queries. Automatic classification of user intent can lead to improved web search engines by tailoring results to specific user needs. Practical implications: The paper discusses how web search engines can use automatically classified user queries to provide more targeted and relevant results in web searching by implementing a real time classification method as presented in this research. Originality/value: This research investigates a new application of a method for automatically classifying the intent of user queries. There has been limited research to date on automatically classifying the user intent of web queries, even though the pay-off for web search engines can be quite beneficial. © Emerald Group Publishing Limited.
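To make the classification idea concrete, the following is a minimal, illustrative sketch of clustering queries by simple surface traits with k-means. The query list, the cue-word sets and the feature choices are hypothetical placeholders, not the feature set or data used in the paper.

```python
# Illustrative sketch only: clusters queries by simple surface traits with
# k-means, in the spirit of the approach described above. The feature set,
# query data and cue-word lists here are hypothetical, not the paper's.
import numpy as np
from sklearn.cluster import KMeans

queries = [
    "history of the eiffel tower",         # informational
    "facebook login",                      # navigational
    "buy cheap flights to sydney",         # transactional
    "python list comprehension examples",  # informational
]

TRANSACTION_WORDS = {"buy", "download", "order", "price", "cheap"}
NAVIGATION_WORDS = {"login", "homepage", "www", "site", "official"}

def features(query):
    """Turn a query string into a small numeric trait vector."""
    tokens = query.lower().split()
    return [
        len(tokens),                                         # query length
        sum(t in TRANSACTION_WORDS for t in tokens),         # transactional cues
        sum(t in NAVIGATION_WORDS for t in tokens),          # navigational cues
        int(any("." in t for t in tokens)),                  # URL-like token present
    ]

X = np.array([features(q) for q in queries], dtype=float)

# The paper reports eight clusters; with only four toy queries we use three
# so the call succeeds.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for query, label in zip(queries, km.labels_):
    print(label, query)
```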
Abstract:
Purpose – The work presented in this paper aims to provide an approach to classifying web logs by personal properties of users. Design/methodology/approach – The authors describe an iterative system that begins with a small set of manually labeled terms, which are used to label queries from the log. A set of background knowledge related to these labeled queries is acquired by combining web search results on these queries. This background set is used to obtain many terms that are related to the classification task. The system then ranks each of the related terms, choosing those that most fit the personal properties of the users. These terms are then used to begin the next iteration. Findings – The authors identify the difficulties of classifying web logs by approaching this problem from a machine learning perspective. By applying the approach developed, the authors are able to show that many queries in a large query log can be classified. Research limitations/implications – Testing results in this type of classification work is difficult, as the true personal properties of web users are unknown. Evaluation of the classification results by comparing classified queries to well-known age-related sites is a direction currently being explored. Practical implications – This research is background work that can be incorporated in search engines or other web-based applications, to help marketing companies and advertisers. Originality/value – This research enhances the current state of knowledge in short-text classification and query log learning. Keywords: Classification schemes, Computer networks, Information retrieval, Man-machine systems, User interfaces
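The iterative seed-expansion loop described above can be sketched roughly as follows. This is an illustrative toy only: the seed set, the query log and the term-ranking step (a simple co-occurrence count here) are invented stand-ins for the paper's ranking against web-derived background knowledge.

```python
# Minimal sketch of the iterative seed-expansion idea described above.
# Seed terms, query log and ranking are hypothetical placeholders; the real
# system ranks terms against background knowledge gathered from web search.
from collections import Counter

def classify_queries(queries, seed_terms):
    """Return queries that contain at least one seed term."""
    return [q for q in queries if any(t in q.lower().split() for t in seed_terms)]

def expand_seeds(labelled_queries, seed_terms, top_n=3):
    """Rank co-occurring terms and promote the most frequent ones to seeds."""
    counts = Counter(
        t for q in labelled_queries for t in q.lower().split() if t not in seed_terms
    )
    return seed_terms | {t for t, _ in counts.most_common(top_n)}

query_log = [
    "pokemon card games for kids",
    "kids homework help maths",
    "retirement pension calculator",
    "cartoon games online",
]
seeds = {"kids"}            # tiny manually labelled seed set (a "young user" property)
for _ in range(2):          # two bootstrap iterations
    labelled = classify_queries(query_log, seeds)
    seeds = expand_seeds(labelled, seeds)
print(sorted(seeds))
```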
Abstract:
Technologies and languages for integrated processes are a relatively recent innovation. Over that period many divergent waves of innovation have transformed process integration. Like sockets and distributed objects, early workflow systems offered programming interfaces that connected the process modelling layer to any middleware. BPM systems emerged later, connecting the modelling world to middleware through components. While BPM systems increased ease of use (modelling convenience), long-standing and complex interactions involving many process instances remained difficult to model. Enterprise Service Buses (ESBs) followed, connecting process models to heterogeneous forms of middleware. ESBs, however, generally forced modellers to choose a particular underlying middleware and to stick to it, despite their ability to connect with many forms of middleware. Furthermore, ESBs encourage process integrations to be modelled on their own, logically separate from the process model. This can lead to the inability to reason about long-standing conversations at the process layer. Technologies and languages for process integration generally lack formality. This has led to arbitrariness in the underlying language building blocks. Conceptual holes exist in a range of technologies and languages for process integration, and this can lead to customer dissatisfaction and to integration projects failing to reach their potential. Standards for process integration share fundamental flaws similar to those of the languages and technologies. Standards are also in direct competition with other standards, causing a lack of clarity. Thus the area of greatest risk in a BPM project remains process integration, despite major advancements in the technology base. This research examines some fundamental aspects of communication middleware and how these fundamental building blocks of integration can be brought to the process modelling layer in a technology-agnostic manner. This way process modelling can be conceptually complete without becoming stuck in a particular middleware technology. Coloured Petri nets are used to define a formal semantics for the fundamental aspects of communication middleware. They provide the means to define and model the dynamic aspects of various integration middleware. Process integration patterns are used as a tool to codify common problems to be solved. Object Role Modelling is a formal modelling technique that was used to define the syntax of a proposed process integration language. This thesis provides several contributions to the field of process integration. It proposes a framework defining the key notions of integration middleware. This framework provides a conceptual foundation upon which a process integration language could be built. The thesis defines an architecture that allows various forms of middleware to be aggregated and reasoned about at the process layer. This thesis provides a comprehensive set of process integration patterns. These constitute a benchmark for the kinds of problems a process integration language must support. The thesis proposes a process integration modelling language and a partial implementation that is able to enact the language. A process integration pilot project in a German hospital is briefly described at the end of the thesis. The pilot is based on ideas in this thesis.
Abstract:
The exhibition consists of a series of 9 large-scale cotton rag prints, printed from digital files, and a sound and picture animation on DVD composed of drawings, sound, analogue and digital photographs, and Super 8 footage. The exhibition represents the artist’s experience of Singapore during her residency. Source imagery was gathered from photographs taken at the Bukit Brown abandoned Chinese Cemetery in Singapore, and Australian native gardens in Parkville Melbourne. Historical sources include re-photographed Singapore 19th and early 20th century postcard images. The works use analogue, hand-drawn and digital imaging, still and animated, to explore the digital interface’s ability to combine mixed media. This practice stems from the digital imaging practice of layering, using various media editing software. The work is innovative in that it stretches the idea of the layer composition in a single image by setting each layer into motion using animation techniques. This creates a multitude of permutations and combinations as the two layers move in different rhythmic patterns. The work also represents an innovative collaboration between the photographic practitioner and a sound composer, Duncan King-Smith, who designed sound for the animation based on concepts of trance, repetition and abstraction. As part of the Art ConneXions program, the work travelled to numerous international venues including: Space 217 Singapore, RMIT Gallery Melbourne, National Museum Jakarta, Vietnam Fine Arts Museum Hanoi, and ifa (Institut fur Auslandsbeziehungen) Gallery in both Stuttgart and Berlin.
Abstract:
For people with intellectual disabilities there are significant barriers to inclusion in socially cooperative endeavours. This paper investigates the effectiveness of Stomp, a tangible user interface (TUI) designed to provide new participatory experiences for people with intellectual disability. Results from an observational study reveal the extent to which the Stomp system supports social and physical interaction. The tangible, spatial and embodied qualities of Stomp result in an experience that does not rely on the acquisition of specific competencies before interaction and engagement can occur.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is entirely carried out by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games heavily rely on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
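As a loose illustration of automated visual-consistency checking, the sketch below compares a rendered frame against a reference image and flags large pixel deviations. The file names and tolerance are hypothetical, and a raw pixel diff is a much simpler device than the model-based and connectionist techniques the thesis actually develops.

```python
# Illustrative sketch only: a naive check that a rendered frame matches a
# reference image, in the spirit of automated visual-consistency testing.
# File names and the tolerance value are hypothetical.
import numpy as np
from PIL import Image

def frame_matches_reference(frame_path, reference_path, tolerance=3.0):
    """Return True if the mean absolute pixel difference is within tolerance."""
    frame = np.asarray(Image.open(frame_path).convert("RGB"), dtype=np.float32)
    ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.float32)
    if frame.shape != ref.shape:
        return False                     # a resolution mismatch is itself a bug
    return float(np.abs(frame - ref).mean()) <= tolerance

# Example usage (paths are placeholders):
# if not frame_matches_reference("render_0042.png", "golden_0042.png"):
#     print("visual regression detected in frame 42")
```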
Abstract:
Video is commonly used as a method for recording embodied interaction for purposes of analysis and design, and has been proposed as a useful ‘material’ for interaction designers to engage with. But video is not a straightforward reproduction of embodied activity – video recordings in themselves ‘flatten’ the space of embodied interaction, they impose a perspective on unfolding action, and they remove the embodied spatial and social context within which embodied interaction unfolds. This does not mean that video is not a useful medium with which to engage as part of a process of investigating and designing for embodied interaction – but crucially, it requires that designers, as people attempting to engage with video, bring their own bodies and bodily understandings into play. This paper describes and reflects upon our experiences of engaging with video in two different activities as part of a larger research project investigating the design of gestural interfaces for a dental surgery context.
Abstract:
Companies and their services are being increasingly exposed to global business networks and Internet-based on-demand services. Much of the focus is on flexible orchestration and consumption of services, beyond the ownership and operational boundaries of services. However, the ways in which third parties in the “global village” can seamlessly self-create new offers out of existing services remain open. This paper proposes a framework for service provisioning in global business networks that allows an open-ended set of techniques for extending services through a rich, multi-tooling environment. The Service Provisioning Management Framework, as such, supports different modeling techniques, through supportive tools, allowing different parts of services to be integrated into new contexts. Integration of service user interfaces, business processes, operational interfaces and business objects is supported. The integration specifications that arise from service extensions are uniformly reflected through a kernel technique, the Service Integration Technique. Thus, the framework preserves coherence of service provisioning tasks without constraining the modeling techniques needed for extending different aspects of services.
Abstract:
Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information has been released to the public – usually for specific purposes (e.g. train timetables or the release of planning applications through websites, to name just a few). This situation is however changing rapidly. Regulatory frameworks, such as the Freedom of Information Legislation in the US, the UK, the European Union and many other countries, guarantee public access to data held by the state. One of the results of this legislation and changing attitudes towards open data has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (U.S.), data.gov.uk (U.K.) and data.gov.au (Australia) at federal government levels, and datasf.org (San Francisco) and data.london.gov.uk (London) at municipal levels. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential achievable by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the pure release of data. It is concerned with three essential questions: First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with data in an urban environment, and how can that data be accessed and collected?
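As one concrete illustration of the "how can data be accessed" question, the sketch below queries an open-data catalogue programmatically, assuming a CKAN-style API of the kind behind several of the catalogues mentioned above. The base URL, search term and field handling are illustrative and should be checked against the target catalogue's own documentation.

```python
# Sketch of programmatic access to an open-data catalogue, assuming a
# CKAN-style API (used by several of the catalogues named above). The
# endpoint, query and field names are illustrative assumptions.
import json
import urllib.parse
import urllib.request

def search_datasets(base_url, query, rows=5):
    """Search a CKAN catalogue and return (dataset title, resource formats) pairs."""
    url = (
        f"{base_url}/api/3/action/package_search"
        f"?q={urllib.parse.quote(query)}&rows={rows}"
    )
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    results = []
    for dataset in payload["result"]["results"]:
        formats = {r.get("format", "") for r in dataset.get("resources", [])}
        results.append((dataset["title"], sorted(formats)))
    return results

# Example usage (catalogue URL and query are placeholders):
# for title, formats in search_datasets("https://data.gov.uk", "air quality"):
#     print(title, formats)
```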
Abstract:
The Social Web is a torrent of real-time information and an emerging discipline is now focussed on harnessing this information flow for analysis of themes, opinions and sentiment. This short paper reports on early work on designing better user interfaces for end users in manipulating the outcomes from these analysis engines.
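For context, a toy sketch of the kind of analysis engine whose outputs such interfaces would need to surface is given below: a tiny lexicon-based sentiment score over a stream of posts. The lexicon and posts are invented; production engines are of course far richer.

```python
# Toy sketch of a lexicon-based sentiment score over a stream of posts.
# The lexicon and posts are hypothetical placeholders.
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "awful", "terrible", "angry"}

def sentiment(post):
    """Score a post as (# positive hits) minus (# negative hits)."""
    tokens = post.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

stream = ["love the new tram timetable", "terrible queues again, so angry"]
for post in stream:
    print(sentiment(post), post)
```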
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above 60 year old age group. Thus, despite the fact that half of the seriously injured population comes from the 30 year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data – CT – involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI imaging. However, a quantification of the signal to noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs. One of the artefacts observed is the step artefact that is believed to occur from the random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple segmentation methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. The single level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora and segmenting them using the multilevel threshold method.
A surface geometric comparison was conducted between CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an iterative closest point (ICP) algorithm-based alignment method. The present study demonstrated that the multi-threshold approach in combination with the threshold selection method can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited 0.23 mm average deviation in comparison to the 0.18 mm average deviation of CT-based models. The differences were not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
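For illustration, the sketch below applies the two segmentation ideas discussed in this abstract, intensity thresholding and Canny edge detection, to a single 2D slice using scikit-image. The file name, the use of Otsu's method for threshold selection and the parameter values are assumptions for the example; the thesis itself uses region-specific multilevel thresholds chosen with its own threshold-selection method.

```python
# Illustrative sketch of intensity thresholding and Canny edge detection on a
# single 2D slice with scikit-image. File name and parameters are hypothetical;
# the thesis uses region-specific multilevel thresholds, not Otsu's method.
import numpy as np
from skimage import feature, filters, io

slice_img = io.imread("ct_slice_0123.png", as_gray=True)  # placeholder path

# Intensity thresholding: a single global threshold separates bone from soft tissue.
threshold = filters.threshold_otsu(slice_img)
bone_mask = slice_img > threshold

# Canny edge detection: delineate the contours used later for 3D reconstruction.
edges = feature.canny(slice_img, sigma=2.0)

print("bone pixels in slice:", int(bone_mask.sum()), "edge pixels:", int(edges.sum()))
```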
Abstract:
The study shows an alternative solution to existing efforts at solving the problem of how to centrally manage and synchronise users’ Multiple Profiles (MP) across multiple discrete social networks. Most social network users hold more than one social network account and utilise them in different ways depending on the digital context (Iannella, 2009a). They may, for example, enjoy friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Therefore many web users need to manage disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming, inefficient, and may lead to lost opportunity. In this thesis the researcher proposes a framework for the management of a user’s multiple online social network profiles. A demonstrator, called Multiple Profile Manager (MPM), will be showcased to illustrate how effective the framework will be. The MPM will achieve the required profile management and synchronisation using a free, open, decentralized social networking platform (OSW) that was proposed by the Vodafone Group in 2010. The proposed MPM will enable a user to create and manage an integrated profile (IP) and share/synchronise this profile with all their social networks. The necessary protocols to support the prototype are also proposed by the researcher. The MPM protocol specification defines an Extensible Messaging and Presence Protocol (XMPP) extension for sharing vCard and social network account information between the MPM Server, MPM Client, and social network sites (SNSs). The writer of this thesis adopted a research approach and a number of use cases for the implementation of the project. The use cases were created to capture the functional requirements of the MPM and to describe the interactions between users and the MPM. In the research a development process was followed in establishing the prototype and related protocols. The use cases were subsequently used to illustrate the prototype via screenshots taken of the MPM client interfaces. The use cases also played a role in evaluating the outcomes of the research, such as the framework, prototype, and the related protocols. An innovative application of this project is in the area of public health informatics. The researcher utilised the prototype to examine how the framework might benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer a more comprehensive personal health overview of patients to physicians. This will give a more complete picture of the patient’s background than is currently available and will prove helpful in providing the right treatment. The MPM prototype and related protocols have a high application value as they can be integrated into the real OSW platform and so serve users in the modern digital world. They also provide online users with a real platform for centrally storing their complete profile data, efficiently managing their personal information, and moreover, synchronising the overall complete profile with each of their discrete profiles stored in their different social network sites.
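To give a flavour of what an XMPP-based profile-synchronisation extension could look like, the sketch below builds a hypothetical MPM IQ stanza with the Python standard library. The namespace, element names and JIDs are invented for illustration and are not the protocol actually specified in the thesis.

```python
# Sketch of what an MPM profile-synchronisation IQ stanza might look like.
# The namespace, element names and JIDs are hypothetical illustrations, not
# the protocol defined in the thesis.
import xml.etree.ElementTree as ET

def build_profile_sync_iq(from_jid, to_jid, full_name, accounts):
    """Build an IQ stanza carrying vCard-style data and linked account IDs."""
    iq = ET.Element("iq", {"type": "set", "from": from_jid, "to": to_jid, "id": "mpm-1"})
    profile = ET.SubElement(iq, "profile", {"xmlns": "urn:example:mpm:profile:0"})
    ET.SubElement(profile, "fn").text = full_name              # vCard-style full name
    for network, account_id in accounts.items():               # linked SNS accounts
        ET.SubElement(profile, "account", {"network": network, "id": account_id})
    return ET.tostring(iq, encoding="unicode")

print(build_profile_sync_iq(
    "alice@mpm.example", "sync.mpm.example",
    "Alice Example", {"facebook": "alice.fb", "linkedin": "alice-li"},
))
```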
Abstract:
Identifying, modelling and documenting business processes usually requires the collaboration of many stakeholders that may be spread across companies in inter-organizational business settings. While there are many process modelling tools available, the support they provide for remote collaboration is still limited. This paper investigates the application of virtual environment and augmented reality technologies to remote business process modelling, with the aim of assisting common collaboration tasks by providing an increased sense of immersion in a shared workspace. We report on the evaluation of a prototype system with five key informants. The results indicate that this approach to business process modelling is suited to remote collaborative task settings, and that stakeholders may indeed benefit from using augmented reality interfaces.