391 results for Interactive Techniques
Abstract:
The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily comprise hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful for model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at reducing the repository’s complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.
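To make the general idea concrete, here is a minimal, purely hypothetical sketch (the function name and the verb-object labelling assumption are ours, not the paper’s, whose actual technique rests on an adaptation of the theory of meaning): a candidate fragment name is derived from the most frequent action and business object across the fragment’s activity labels.

```python
# Hypothetical sketch: name a process fragment from its activity labels.
# Assumes labels follow the common verb-object convention ("Check invoice").
from collections import Counter

def suggest_fragment_name(activity_labels):
    """Pick the most frequent action and business object across labels."""
    verbs, objects = Counter(), Counter()
    for label in activity_labels:
        tokens = label.lower().split()
        if len(tokens) >= 2:
            verbs[tokens[0]] += 1             # first token taken as the verb
            objects[" ".join(tokens[1:])] += 1  # remainder as the object
    if not verbs:
        return "Unnamed fragment"
    verb = verbs.most_common(1)[0][0]
    obj = objects.most_common(1)[0][0]
    return f"{verb.capitalize()} {obj}"

print(suggest_fragment_name(
    ["Check invoice", "Approve invoice", "Archive invoice"]))
# -> "Check invoice" (frequency ties resolve by insertion order)
```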
Abstract:
“The Cube” is a unique facility that combines 48 large multi-touch screens and very large-scale projection surfaces to form one of the world’s largest interactive learning and engagement spaces. The Cube facility is part of the Queensland University of Technology’s (QUT) newly established Science and Engineering Centre, designed to showcase QUT’s teaching and research capabilities in the STEM (Science, Technology, Engineering, and Mathematics) disciplines. In this application paper we describe the Cube, its technical capabilities, design rationale and practical day-to-day operations, which support up to 70,000 visitors per week. Essential to the Cube’s operation are five interactive applications designed and developed in tandem with the Cube’s technical infrastructure. Each of the Cube’s launch applications was designed and delivered by an independent team, while the overall vision of the Cube was shepherded by a small executive team. The diversity of design, implementation and integration approaches pursued by these five teams provides some insight into the challenges, and opportunities, presented when working with large distributed interaction technologies. We describe each of these applications in order to discuss the different challenges and user needs they address, which types of interactions they support and how they utilise the capabilities of the Cube facility.
Abstract:
Genomic DNA obtained from patient whole blood samples is a key element for genomic research. The advantages and disadvantages, in terms of time-efficiency, cost-effectiveness and laboratory requirements, of the procedures available to isolate nucleic acids need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting out method, a modified salting out method and a commercially available kit method) to determine the most cost-effective and time-efficient method to extract DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products obtained in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification) of the yield. The three methods showed no statistically significant differences in the final yield, but when we accounted for the time and cost required by each method, the differences were highly significant. The modified salting out method reduced cost sevenfold compared to the commercial kit and twofold compared to the traditional salting out method, and reduced processing time from 3 days to 1 hour compared to the traditional salting out method. This highlights the modified salting out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.
Abstract:
Results of an interlaboratory comparison on size characterization of airborne SiO2 nanoparticles using on-line and off-line measurement techniques are discussed. The study was performed in the framework of Technical Working Area (TWA) 34, “Properties of Nanoparticle Populations”, of the Versailles Project on Advanced Materials and Standards (VAMAS), under project no. 3, “Techniques for characterizing size distribution of airborne nanoparticles”. Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters between 36.2–46.6 nm and 80.2–89.8 nm, respectively, were generated for the characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of the size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy, and atomic force microscopy were used as off-line measurement techniques for nanoparticle characterization. Samples were deposited on appropriate supports such as grids, filters, and mica plates by electrostatic precipitation and a filtration technique, using SMPS-controlled generation upstream. The results for the main size distribution parameters (mean and mode diameters), obtained from several laboratories, were compared based on metrological approaches including metrological traceability, calibration, and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for the characterization of airborne SiO2 nanoparticles are proposed.
Abstract:
A significant amount of speech is typically required for speaker verification system development and evaluation, especially in the presence of large intersession variability. This paper introduces source- and utterance-duration-normalized linear discriminant analysis (SUN-LDA) approaches to compensate for session variability in short-utterance i-vector speaker verification systems. Two variations of SUN-LDA are proposed, in which normalization techniques are used to capture source variation from both short and full-length development i-vectors: one based upon pooling (SUN-LDA-pooled) and the other upon concatenation (SUN-LDA-concat) across the duration- and source-dependent session variation. Both the SUN-LDA-pooled and SUN-LDA-concat techniques are shown to provide improvements over traditional LDA on the NIST 08 truncated 10sec-10sec evaluation conditions, with the best results obtained with the SUN-LDA-concat technique: a relative improvement in EER of 8% for mismatched conditions and over 3% for matched conditions over traditional LDA approaches.
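As a rough illustration of the pooled variant, the sketch below (our assumption-level reconstruction, not the authors’ implementation) estimates a classic LDA projection after simply stacking short and full-length development i-vectors, so the scatter matrices absorb duration-related as well as session variation:

```python
# Minimal sketch of the "pooled" idea behind SUN-LDA-pooled: development
# i-vectors from short and full-length utterances are pooled before the
# LDA scatter matrices are estimated (speakers are the classes).
import numpy as np
from scipy.linalg import eigh

def lda_projection(ivectors, speaker_ids, n_dims):
    """Classic LDA: returns a d x n_dims projection matrix."""
    mu = ivectors.mean(axis=0)
    d = ivectors.shape[1]
    Sw = np.zeros((d, d))   # within-speaker scatter
    Sb = np.zeros((d, d))   # between-speaker scatter
    labels = np.asarray(speaker_ids)
    for spk in set(speaker_ids):
        X = ivectors[labels == spk]
        mu_s = X.mean(axis=0)
        Sw += (X - mu_s).T @ (X - mu_s)
        Sb += len(X) * np.outer(mu_s - mu, mu_s - mu)
    # Generalized eigenproblem Sb v = w Sw v; keep leading eigenvectors.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]

# Pooling step: short and full-length development sets are simply stacked.
# proj = lda_projection(np.vstack([short_ivecs, full_ivecs]),
#                       list(short_spk) + list(full_spk), n_dims=200)
```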
Abstract:
Two longitudinal experiments were conducted over six months exploring emotional experiences with Portable Interactive Devices (PIDs). Results identifying the impact of negative social and personal interactions on the overall emotional experience, as well as different task categories (Features, Functional, Mediation and Auxiliary) and their corresponding emotional responses, have previously been reported [2,3,4,5]. This paper builds on these findings and presents the Designing for Evolving Emotional Experience (DE3) framework, which promotes positive (and addresses negative) emotional experiences with PIDs and includes a set of principles for better understanding emotional experiences. To validate the DE3 framework a preliminary trial was conducted with five practicing industrial designers. The trial required them to develop initial design concepts using the DE3 framework, followed by a questionnaire asking about their use of the framework for concept development. The trial aimed to analyse the effectiveness, efficiency and usefulness of the framework in assisting the development of initial concepts for PIDs that take emotional experiences into account. Common themes regarding the framework are outlined, including its ease of use, its effectiveness in focusing on the personal and social contexts, and positive ratings regarding its use. Overall the feedback from the preliminary trial was encouraging, with responses suggesting that the framework was accessible, rated highly and, most importantly, permitted designers to consider emotional experiences during concept development. The paper concludes with a discussion of the future development of the DE3 framework and its potential implications for design theory and the design discipline.
Abstract:
A people-to-people matching system (or match-making system) is a system that users join with the objective of meeting other users with a common need. Real-world examples of these systems include employer-employee (in job search networks), mentor-student (in university social networks), consumer-to-consumer (in marketplaces) and male-female (in an online dating network) matching. The network underlying these systems consists of two groups of users, and the relationships between users need to be captured to develop an efficient match-making system. Most existing studies utilize information either about each user in isolation or about their interactions, and develop recommender systems using one form of information only. It is imperative to understand the linkages among the users in the network and use them in developing a match-making system. This study utilizes several social network analysis methods, such as graph theory, the small-world phenomenon, centrality analysis and density analysis, to gain insight into the entities and their relationships present in this network. The paper also proposes a new type of graph called an “attributed bipartite graph”. Using these analyses and the proposed type of graph, an efficient hybrid recommender system is developed which generates recommendations for new users and shows improvement in accuracy over the baseline methods.
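A minimal sketch of what an attributed bipartite graph might look like in practice, using NetworkX (the node names, attributes and edge data are illustrative, not taken from the paper): the two user groups form the bipartite sets, profile attributes sit on the nodes, and interactions sit on the edges, so both structure and attributes can feed a recommender.

```python
# Illustrative attributed bipartite graph with NetworkX.
import networkx as nx
from networkx.algorithms import bipartite

G = nx.Graph()
# One side: e.g. mentors; the other: students. 'bipartite' marks the set.
G.add_node("mentor_1", bipartite=0, expertise="statistics")
G.add_node("mentor_2", bipartite=0, expertise="databases")
G.add_node("student_1", bipartite=1, interest="statistics")
G.add_node("student_2", bipartite=1, interest="databases")
# Edges record observed interactions between the two groups.
G.add_edge("mentor_1", "student_1", contacts=5)
G.add_edge("mentor_2", "student_2", contacts=2)

# Structural signals of the kind the study analyses: centrality and density.
print(nx.degree_centrality(G))
mentors = {n for n, d in G.nodes(data=True) if d["bipartite"] == 0}
print(bipartite.density(G, mentors))
```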
Abstract:
This project was an experiment in large-scale, live game design and public performance, bringing together participants from across the creative arts to design, deliver and document a project that was both a cooperative learning experience and an experimental public performance. The four-month project, funded by the Edge Digital Centre, culminated in a 24-hour alternate reality game (ARG) event involving over 100 participants in December 2012. Using the premise of a viral outbreak, young enthusiasts auditioned for the roles of Survivor, Zombie, Medic and Military. The main objective was for the Survivors to complete a series of challenges over 24 hours, while the other characters fulfilled their opposing objectives of interference and sabotage, supported by both scripted and free-form scenarios staged in constructed scenes throughout the venues. The event was set in the State Library of Queensland and the Edge Digital Centre, which granted the project full access, night and day, to all areas including public, office and underground areas. These venues were transformed into cinematic settings full of interactive props and various audio-visual effects. The ZomPoc Project was an innovative experiment in writing and directing a large-scale, live, public performance. In order to design such an event, a number of innovative resources were developed exploiting techniques from game design, theatre, film, television and tangible media production. A series of workshops invited local artists, scientists, technicians and engineers to find new ways of collaborating to create networked artifacts, experimental digital works, robotic props, modular set designs, sound effects and unique costuming, guided by an innovative multi-platform script developed by Deb Polson. The result of this collaboration was the creation of innovative game and set props, both atmospheric and interactive. Such works animated the space, presented story clues and facilitated interactions between strangers who found themselves sharing a unique experience in unexpected places.
Abstract:
Using cooperative learning in classrooms promotes academic achievement, communication skills, problem-solving, social skills and student motivation. Yet it has been reported that cooperative learning, as a Western educational concept, may be ineffective in Asian cultural contexts. This study investigates the use of scaffolding techniques for cooperative learning in Thai primary mathematics classes. A teacher training program was designed to foster Thai primary school teachers’ implementation of cooperative learning. Two teachers participated in this experimental program for one and a half weeks and then implemented cooperative learning strategies in their mathematics classes for six weeks. The data collected from teacher interviews and classroom observations indicate that the difficulty or failure of implementing cooperative learning in Thai education may not derive directly from cultural differences. Instead, they indicate that Thai culture can be constructively merged with cooperative learning through a teacher training program and the practice of scaffolding techniques.
Abstract:
Airport efficiency is important because it has a direct impact on customer safety and satisfaction, and therefore on the financial performance and sustainability of airports, airlines, and affiliated service providers. This is especially so in a world characterized by an increasing volume of both domestic and international air travel; price and other forms of competition between rival airports, airport hubs and airlines; and rapid and sometimes unexpected changes in airline routes and carriers. It also reflects expansion in the number of airports handling regional, national, and international traffic and the growth of complementary airport facilities including industrial, commercial, and retail premises. This has fostered a steadily increasing volume of research aimed at modeling and providing best-practice measures and estimates of airport efficiency using mathematical and econometric frontiers. The purpose of this chapter is to review these various methods as they apply to airports throughout the world. Apart from discussing the strengths and weaknesses of the different approaches and their key findings, the chapter also examines the steps faced by researchers as they move through the modeling process in defining airport inputs and outputs and the purported efficiency drivers. Accordingly, the chapter provides guidance to those conducting empirical research on airport efficiency and serves as an aid for aviation regulators, airport operators and others interpreting airport efficiency research outcomes.
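As one concrete example of the frontier methods such reviews cover, the following sketch solves an input-oriented, constant-returns-to-scale data envelopment analysis (DEA) model as a linear program; the airport input and output figures are invented purely for illustration:

```python
# Input-oriented CCR DEA model solved with scipy's linear programming.
# Efficiency theta = 1.0 means the airport lies on the best-practice frontier.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Efficiency of airport o. X: inputs (n x m), Y: outputs (n x s)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o], X.T]
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros(s), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

X = np.array([[100.0, 50], [120, 40], [90, 70]])  # e.g. staff, gates
Y = np.array([[8.0], [9], [6]])                   # e.g. passengers (millions)
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])
```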
Abstract:
This paper presents a comparative study of the response of a buried tunnel to surface blast using the arbitrary Lagrangian-Eulerian (ALE) and smoothed particle hydrodynamics (SPH) techniques. Since explosive tests with real physical models are extremely risky and expensive, the results of a centrifuge test were used to validate the numerical techniques. The numerical study shows that the ALE predictions were faster and closer to the experimental results than those from the SPH simulations, which overpredicted the strains. The findings of this research demonstrate the superiority of the ALE modelling techniques for the present study. They also provide a comprehensive understanding of the preferred ALE modelling techniques which can be used to investigate the surface blast response of underground tunnels.
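For readers unfamiliar with the SPH side of the comparison, the sketch below shows the generic building block of any SPH formulation (a textbook illustration, not the solver used in this study): field quantities such as density are estimated as kernel-weighted sums over neighbouring particles, here with the standard cubic-spline kernel.

```python
# Generic SPH illustration: cubic-spline smoothing kernel (3D, Monaghan
# form) and the summation density estimate rho_i = sum_j m_j W(|x_i-x_j|, h).
import numpy as np

def cubic_spline_w(r, h):
    """Cubic-spline kernel W(r, h), normalized for 3D."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    if q < 1.0:
        return sigma * (1 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2 - q) ** 3
    return 0.0  # compact support: particles beyond 2h do not contribute

def density(positions, masses, i, h):
    """Summation density at particle i (positions: n x 3 array)."""
    r = np.linalg.norm(positions - positions[i], axis=1)
    return sum(m * cubic_spline_w(d, h) for m, d in zip(masses, r))
```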
Abstract:
There is growing scholarly interest in the everyday work undertaken by screen producers, prompted in part by disciplinary shifts (the ‘material turn’, the rise of creative industries research) and in part by major transformations in the business of media production and consumption in recent years. However, the production cultures and motivations of screen producers, particularly those working in emergent online and convergent media markets, remain poorly understood. The 2012 Australian Screen Producer Survey, building upon the Australian Screen Content Producer Survey conducted in 2009, was a nation-wide survey-based study of screen content producers working in four industry segments: film, television, corporate and new media production. The broad objectives of the 2012 survey were to:
• Provide deeper and more detailed analysis of the nature of digital media producers and their practices, and how these findings compare to the practices of established screen media producers;
• Interrogate issues around the pace of industry change, industry sentiment and how producers are adapting to a changing marketplace; and
• Offer insight into the transitional pathways of established media producers into production for digital media markets.
The Australian Screen Producer Survey Online Interactive provides users (principally filmmakers, scholars and policymakers) with direct access to raw survey data through an interactive website that allows them to customise queries according to particular interests. The Online Interactive therefore provides customisable findings – unlike ‘static’ research outputs – delineating the practices, attitudes, strategies, and aspirations of screen producers working in feature film, television and corporate production as well as those operating in an increasingly convergent digital media marketplace. The survey was developed by researchers at the ARC Centre of Excellence for Creative Industries and Innovation (CCI), Queensland University of Technology, Deakin University, and the Centre for Screen Business at the Australian Film Television and Radio School (AFTRS), and was undertaken in association with Bergent Research. The Online Interactive website (http://screenproducersurvey.com/) was developed with support from the Centre for Memory Imagination and Invention (CMII).
Abstract:
A recent success story of the Australian videogames industry is Brisbane-based Halfbrick Studios, developer of the hit game for mobile devices, Fruit Ninja. Halfbrick not only survived the global financial crisis and an associated downturn in the Australian industry, but grew strongly, moving rapidly from developing licensed properties for platforms such as Game Boy Advance, Nintendo DS, and PlayStation Portable (PSP) to becoming an independent developer and publisher of in-house titles, generating revenue both through App downloads and merchandise sales. Amongst the reasons for Halfbrick’s success is their ability to adaptively transform by addressing different technical platforms, user dynamics, business models and market conditions. Our ongoing case-study research into Halfbrick’s innovation processes, running since 2010 and culminating in some 10 semi-structured interviews with senior managers and developers, has identified a strong focus on workplace organisational culture: staff reflect that the company is a flat, team-based organisation that devolves as much control as possible to the development teams directly and encourages a work-life balance in which creativity can thrive. The success of this strategy is evidenced by Halfbrick’s low staff turnover; most of the developers we interviewed had been with the company for a number of years, and all spoke positively of the workplace culture and sense of creative autonomy they enjoyed. Interviews with the CEO, Shainiel Deo, and team leaders highlighted the autonomy afforded to each team and the organisation and management of the projects on which they work. Deo and the team leaders emphasised the collaboration and communication skills they require in the developers they employ, and that these characteristics were considered just as significant in hiring decisions as technical skills. Halfbrick’s developers celebrate their workplace culture and insist it has contributed to their capacity for innovation and to their commercial success with titles such as Fruit Ninja. This model of organisational management is reflected in both Stark’s (2009) idea of heterarchy and Neff’s (2012) concept of venture labour, and provides a different perspective on the industry than the traditional political economy critique of precarious labour exploited by gaming conglomerates. Nevertheless, throughout many of the interviews and in our informal discussions with Halfbrick developers there is also a sense that this rewarding culture is quite tenuous and precarious in the context of a rapidly changing and uncertain global videogames industry. Whether such a workplace culture represents the future of the games industry, or is merely a ‘Prague Spring’ before companies such as Halfbrick are swallowed by traditional players, remains to be seen. However, as the process of rapid and uncertain transformation plays out across the videogames industry, it is important to pay attention to emerging modes of organisation and workplace culture, even whilst they remain at the margins of the industry. In this paper we investigate Halfbrick’s workplace culture and ask how sustainable this kind of rewarding and creative workplace is.
Abstract:
This paper proposes techniques to improve the performance of i-vector based speaker verification systems when only short utterances are available. Short-utterance i-vectors vary with the speaker, session variations, and the phonetic content of the utterance. Well-established methods such as linear discriminant analysis (LDA), source-normalized LDA (SN-LDA) and within-class covariance normalisation (WCCN) exist for compensating for session variation, but we have identified the variability introduced by phonetic content, due to utterance variation, as an additional source of degradation when short-duration utterances are used. To compensate for utterance variations in short-utterance i-vector speaker verification systems using cosine similarity scoring (CSS), we introduce a short utterance variance normalization (SUVN) technique and a short utterance variance (SUV) modelling approach at the i-vector feature level. A combination of SUVN with LDA and SN-LDA is proposed to compensate for the session and utterance variations, and is shown to improve performance over the traditional approach of using LDA and/or SN-LDA followed by WCCN. An alternative approach is also introduced, using a probabilistic linear discriminant analysis (PLDA) approach to directly model the SUV. The combination of SUVN, LDA and SN-LDA followed by SUV PLDA modelling provides an improvement over the baseline PLDA approach. We also show that for this combination of techniques, the utterance variation information needs to be artificially added to full-length i-vectors for PLDA modelling.
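For context, here is a minimal sketch of the WCCN step mentioned above, in its generic textbook formulation (not the authors’ code): the within-speaker covariance is estimated on development i-vectors, and the Cholesky factor of its inverse is applied before cosine similarity scoring.

```python
# Generic WCCN: estimate within-speaker covariance W on development
# i-vectors, find B with B @ B.T = inv(W), and score in the whitened space.
import numpy as np

def wccn_projection(ivectors, speaker_ids):
    d = ivectors.shape[1]
    W = np.zeros((d, d))
    labels = np.asarray(speaker_ids)
    speakers = set(speaker_ids)
    for spk in speakers:
        X = ivectors[labels == spk]
        Xc = X - X.mean(axis=0)
        W += Xc.T @ Xc / len(X)
    W /= len(speakers)
    # Small ridge keeps the inverse well conditioned.
    return np.linalg.cholesky(np.linalg.inv(W + 1e-6 * np.eye(d)))

def cosine_score(x, y, B):
    """Cosine similarity between two i-vectors after WCCN projection."""
    xw, yw = B.T @ x, B.T @ y
    return float(xw @ yw / (np.linalg.norm(xw) * np.linalg.norm(yw)))
```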