915 results for Emulators (Computer programs)
Abstract:
Large digital screens are becoming prevalent across today's cities, dispersing into everyday urban spaces such as public squares and cultural precincts. Examples such as Federation Square demonstrate the opportunities for using digital screens to create a sense of place and to add long-term social, cultural and economic value for the citizens who live and work in those precincts. However, the challenge of implementing digital screens in new urban developments is to ensure they respond appropriately to the physical and sociocultural environment in which they are placed. Considering the increasing rate at which digital screens are being embedded into public spaces, it is surprising that the programs running on these screens still seem to be stuck in the cinematic model. The availability of advanced networking and interaction technologies offers opportunities for information access that go beyond free-to-air television and advertising. This chapter revisits the history and current state of digital screens in urban life and discusses a series of research studies that involve digital screens as an interface between citizens and the city. Instead of focusing on technological concerns, the chapter presents a holistic analysis of these studies, with the aim of moving towards a more comprehensive understanding of the sociocultural potential of this new media platform, of how the digital content is linked with the spatial quality of the physical space, and of the place and role of digital screens within the smart city movement.
Abstract:
This paper describes the development and use of personas, a Human Computer Interaction (HCI) research methodology, within the STIMulate peer learning program, in order to better understand student behaviour patterns and motivations. STIMulate is a support-for-learning program at the Queensland University of Technology (QUT) in Brisbane, Australia. The program provides assistance in mathematics, science and information technology (IT) for coursework students. A STIMulate space is provided for students to study and obtain one-on-one assistance from Peer Learning Facilitators (PLFs), experienced students who have excelled in relevant subject areas. This paper describes personas (archetypal users) that represent the motivations and behavioural patterns of students who utilise STIMulate, particularly the IT stream. The personas were developed from interviews with PLFs and subsequently validated by a PLF focus group. Seven different personas were developed. The personas enable us to better understand the characteristics of the students utilising the STIMulate program. The research provides a clearer picture of visiting students' motivations and behavioural patterns. This has helped us identify gaps in the services provided and become more aware of our assumptions about students. The personas have been deployed in PLF training programs to help PLFs provide a better service to students. The findings also suggest further study of the resonances between some students and PLFs, which we would like to elicit more fully.
Abstract:
Companies such as NeuroSky and Emotiv Systems are selling non-medical EEG devices for human-computer interaction. These devices are significantly more affordable than their medical counterparts and are mainly used to measure levels of engagement, focus, relaxation and stress. This information is sought after for marketing research and games. However, these EEG devices have the potential to enable users to interact with their surrounding environment using thoughts only, without activating any muscles. In this paper, we present preliminary results demonstrating that, despite reduced voltage and time sensitivity compared to medical-grade EEG systems, the signal quality of the Emotiv EPOC neuroheadset is sufficiently good to allow discrimination between imaging events. We collected streams of raw EEG data and trained different types of classifiers to discriminate between three states (rest and two imaging events). We achieved a generalisation error of less than 2% for two types of non-linear classifiers.
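A minimal sketch of the kind of classification pipeline described above, assuming feature vectors have already been extracted from the raw Emotiv EPOC stream; the RBF-kernel support vector machine, the feature layout and the synthetic placeholder data are illustrative assumptions, not the authors' exact setup (Python with scikit-learn):

# Sketch: discriminating three mental states (rest + two imaging events)
# from EEG feature vectors with a non-linear classifier. The channel count,
# number of features per channel and window count are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: 300 windows x (14 channels * 8 band-power features).
# In practice these would come from labelled segments of the raw EEG stream.
X = rng.normal(size=(300, 14 * 8))
y = rng.integers(0, 3, size=300)        # 0 = rest, 1/2 = imaging events

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

With real labelled windows in place of the placeholder arrays, the cross-validated accuracy would correspond to the generalisation error reported in the abstract.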
Abstract:
Bird species richness surveys are one of the most intriguing ecological topics for evaluating environmental health. Here, bird species richness denotes the number of unique bird species in a particular area. Factors affecting the investigation of bird species richness include weather, observation bias and, most importantly, the prohibitive costs of conducting surveys at large spatiotemporal scales. Thanks to advances in recording techniques, these problems have been alleviated by deploying sensors for acoustic data collection. Although automated detection techniques have been introduced to identify various bird species, the innate complexity of bird vocalizations, the background noise present in recordings and the escalating volumes of acoustic data make determining bird species richness a challenging task. In this paper we propose a two-step computer-assisted sampling approach for determining bird species richness in one-day acoustic data. First, a classification model is built on acoustic indices to filter out minutes that contain few bird species. Then the remaining minutes are ranked by an acoustic index and temporally redundant minutes are removed from the ranked sequence. The experimental results show that our method is more efficient than previous methods at directing experts towards the minutes needed to determine bird species richness.
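A rough sketch of the two-step sampling idea, assuming per-minute acoustic indices have already been computed for the recording; the random-forest filter, the choice of ranking index and the redundancy window are assumptions made for illustration, not the exact method of the paper:

# Step 1: filter out minutes predicted to contain few bird species using a
# model over acoustic indices. Step 2: rank the surviving minutes by one
# index and drop minutes that are temporally redundant.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_minutes(train_indices, train_labels, day_indices, redundancy_window=5):
    """train_indices/train_labels: labelled minutes from previously surveyed days.
    day_indices: (1440, n_features) acoustic indices for the one-day recording."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(train_indices, train_labels)        # step 1: learn the "few species" filter
    keep = model.predict(day_indices)             # minutes predicted to contain several species
    candidates = np.flatnonzero(keep)

    # step 2: rank candidates by one acoustic index (column 0 here) and skip
    # minutes that fall within the redundancy window of an already chosen one
    ranked = candidates[np.argsort(day_indices[candidates, 0])[::-1]]
    selected = []
    for minute in ranked:
        if all(abs(int(minute) - s) > redundancy_window for s in selected):
            selected.append(int(minute))
    return selected

The returned minute list is what an expert would then listen to when tallying species for the day.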
Abstract:
This study investigated the development and operation of Learner Driver Mentor Programs (LDMPs). LDMPs are used throughout Australia to assist young learner drivers to gain supervised on-road driving experience through coordinated access to vehicles and supervisors. There is a significant lack of research regarding these programs. In this study, 41 stakeholders, including representatives from existing or ceased LDMPs as well as representatives of other groups, completed a questionnaire in either survey or interview format. The questionnaire sought information about the objectives of LDMPs and any social problems that were targeted, as well as the characteristics of an ideal program and what could be done to improve existing programs. Stakeholders indicated that LDMPs were targeted at local communities and, therefore, there should be a clear local need for the program as well as community ownership and involvement in it. Additionally, the program needed to be accessible and provide clear positive outcomes for mentees. The most common suggestion for improving LDMPs related to the provision of greater funding and sponsorship, particularly in relation to the vehicles used within the programs. LDMPs appear to have an important role in helping young learner drivers acquire the appropriate number of supervised hours of driving practice. However, while a number of factors appear related to a successful program, the program must remain flexible and suitable for its local community. There is a clear need to complete evaluations of existing programs to ensure that future LDMPs and modifications to existing programs are evidence-based.
Abstract:
Many software applications extend their functionality by dynamically loading libraries into their allocated address space. However, shared libraries are often of unknown provenance and quality and may contain accidental bugs or, in some cases, deliberately malicious code. Most sandboxing techniques that address these issues require recompilation of the libraries using custom toolchains, require significant modifications to the libraries, do not retain the benefits of single-address-space programming, do not completely isolate guest code, or incur substantial performance overheads. In this paper we present LibVM, a sandboxing architecture for isolating libraries within a host application without requiring any modifications to the shared libraries themselves, while still retaining the benefits of a single address space and introducing a system-call interposition layer that allows complete arbitration over a shared library's functionality. We show how to utilize contemporary hardware virtualization support towards this end with reasonable performance overheads; in the absence of such hardware support, our model can also be implemented using a software-based mechanism. We ensure that our implementation conforms as closely as possible to existing shared library manipulation functions, minimizing the amount of effort needed to apply such isolation to existing programs. Our experimental results show that it is easy to gain immediate benefits in scenarios where the goal is to guard the host application against unintentional programming errors when using shared libraries, as well as in more complex scenarios where a shared library is suspected of being actively hostile. In both cases, no changes are required to the shared libraries themselves.
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of the research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable and robust, and requires only a few training images (it was tested with 5 to 10 images per experiment). Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown: it allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
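As a toy illustration only (not the CGP-IP implementation itself), the following sketch shows the general shape of evolving a small feed-forward graph of image operations against a ground-truth mask; the function set, genome encoding and the (1+4) evolution strategy are simplifying assumptions:

import numpy as np

# Tiny function set; each primitive maps two images to one image.
FUNCS = [
    lambda a, b: (a + b) / 2.0,                    # average two inputs
    lambda a, b: np.abs(a - b),                    # absolute difference
    lambda a, b: (a > a.mean()).astype(float),     # threshold first input at its mean
    lambda a, b: 1.0 - a,                          # invert first input
]

def run_graph(genome, image):
    # Node 0 is the input image; each gene appends one new node that reads
    # from earlier nodes only, so the graph stays feed-forward.
    values = [image]
    for f_idx, in1, in2 in genome:
        a, b = values[in1 % len(values)], values[in2 % len(values)]
        values.append(FUNCS[f_idx % len(FUNCS)](a, b))
    return values[-1]                              # last node is the predicted mask

def random_gene(rng, n_nodes):
    return (int(rng.integers(len(FUNCS))),
            int(rng.integers(n_nodes)), int(rng.integers(n_nodes)))

def evolve(image, target, n_nodes=8, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    parent = [random_gene(rng, n_nodes) for _ in range(n_nodes)]
    best = np.mean(np.abs(run_graph(parent, image) - target))
    for _ in range(generations):                   # simple (1+4) evolution strategy
        for _ in range(4):
            child = list(parent)
            child[int(rng.integers(n_nodes))] = random_gene(rng, n_nodes)
            err = np.mean(np.abs(run_graph(child, image) - target))
            if err <= best:
                parent, best = child, err
    return parent, best

A call such as evolve(image, target_mask) returns the best graph found and its pixel-wise error; CGP-IP itself draws on a far richer function set of standard image-processing operations.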
Abstract:
Eklundh's (1972) algorithm for transposing a large matrix stored on an external device such as a disc has been programmed and tested. A simple description of the computer implementation is given in this note.
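For reference, a minimal in-memory sketch of the block-exchange pattern that Eklundh's algorithm applies; in the external-memory setting each stage would read a pair of rows from disc, exchange the marked segments and write them back, whereas here the whole 2^n x 2^n matrix is held in a NumPy array purely to illustrate the access pattern:

import numpy as np

def eklundh_transpose(a):
    n = a.shape[0]
    assert a.shape == (n, n) and n & (n - 1) == 0, "size must be a power of two"
    delta = n // 2
    while delta >= 1:
        for i in range(n):
            if i & delta:                # only rows whose 'delta' bit is 0 start a pair
                continue
            for j in range(n):
                if j & delta:            # exchange with the paired row, shifted left by delta
                    a[i, j], a[i + delta, j - delta] = a[i + delta, j - delta], a[i, j]
        delta //= 2
    return a

m = np.arange(16).reshape(4, 4)
print(eklundh_transpose(m.copy()))       # equals m.T

Each stage touches every element once, and only log2 of the matrix dimension passes over the data are needed, which is what makes the method attractive when the matrix is too large for core memory.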
Abstract:
Most alcohol education programs are designed by experts, with the target audience largely excluded from this process. Theoretically, the application of co-creation, which comprises co-design and co-production, offers an opportunity to better orient programs to meet audience needs and wants and thereby enhance program outcomes. To date, research focus has centred on value co-creation, with content co-design receiving limited research attention. The current study seeks to understand how young people would design an intervention and continues by contrasting an audience-designed program with an earlier implemented expert-designed program.
Abstract:
The benefits that accrue from the use of a design database include (i) reduced costs of preparing data for application programs and of producing the final specification, and (ii) the possibility of later use of the data stored in the database for other applications related to Computer Aided Engineering (CAE). An INTEractive Relational GRAphics Database (INTERGRAD) based on relational models has been developed to create, store, retrieve and update data related to two-dimensional drawings. INTERGRAD provides two languages, a Picture Definition Language (PDL) and a Picture Manipulation Language (PML). The software package has been implemented on a PDP 11/35 system under the RSX-11M version 3.1 operating system and uses a graphics facility consisting of a VT-11 graphics terminal, the DECgraphic 11 software and an input device, a lightpen.
Abstract:
The aim of the pedigree-based genome mapping project is to investigate and develop systems for implementing marker-assisted selection to improve the efficiency of selection and increase the rate of genetic gain in breeding programs. Pedigree-based whole-genome marker application provides a vehicle for incorporating marker technologies into applied breeding programs by bridging the gap between marker-trait association and marker implementation. We report on the development of protocols for implementing pedigree-based whole-genome marker analysis in breeding programs within the Australian northern winter cereals region. Examples of applications from the Queensland DPI&F wheat and barley breeding programs are provided, with comments on the use of microsatellites and other types of molecular markers for routine genomic analysis, the integration of genotypic, phenotypic and pedigree information for targeted wheat and barley lines, the genomic impacts of strong selection pressure in case-study pedigrees, and directions for future pedigree-based marker development and analysis.
Abstract:
Hazard site surveillance is a system for post-border detection of new pest incursions, targeting sites considered to be at potentially high risk of such introductions. Globalisation, increased volumes of containerised freight and competition for space at domestic ports mean that goods are increasingly being first opened at premises some distance from the port of entry, thus dispersing risk away from the main inspection point. Hazard site surveillance acts as a backstop to border control to ensure that new incursions are detected sufficiently early to allow the full range of management options, including eradication and containment, to be considered. This is particularly important for some of the more cryptic forest pests, whose presence in a forest is often not discovered until populations are already high and the pest is well established. General requirements for a hazard site surveillance program are discussed using a program developed in Brisbane, Australia, in 2006 as a case study. Some early results from the Brisbane program are presented. In total, 67 species and 5757 individuals of wood-boring beetles have been trapped and identified during the program to date. Scolytines are the most abundant taxon, making up 83% of the catch. No new exotics have been trapped, but 19 of the species and 60% of all specimens caught are exotics that are already established in Australia.
Abstract:
Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory; nevertheless, in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is called a retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve the products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows the storage of large representations for long periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, such as temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings; thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. Another study demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid, visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized because access is too slow.
These findings imply a change in thinking about the design of interfaces. Several novel design principles are presented, based on the idea of supporting the deployment of LTWM in the main task.
Abstract:
The present study examined how personality and social psychological factors affect third and fourth graders' computer-mediated communication. Personality was analysed in terms of the following strategies: optimism, pessimism and defensive pessimism. Students worked either individually or in dyads, which were paired homogeneously or heterogeneously according to the strategies. Moreover, the present study compared horizontal and vertical interaction. The study also examined the role that popularity plays, and students were divided into groups based on their popularity level. The results show that an optimistic strategy is useful. Optimism was found to be related to the active production and processing of ideas. Although previous research has identified drawbacks to pessimism in achievement settings, this study shows that the pessimistic strategy is not as debilitating as is usually assumed. Pessimistic students were able to process their ideas. However, defensive pessimists were somewhat cautious in introducing or changing ideas. Heterogeneous dyads were not beneficial configurations with respect to producing, introducing, or changing ideas. Moreover, many differences were found between horizontal and vertical interaction; specifically, the students expressed more opinions and feelings when teachers took no part in the discussions. Strong emotions were observed especially in the horizontal interaction. Further, group working skills were found to be more important for boys than for girls, while rejected students were not at a disadvantage compared to popular ones. Schools can encourage emotional and social learning. The present study shows that students can use computers to express their feelings. In addition, students who are unpopular in non-computer contexts or students who use pessimism can benefit from computers. Participation in computer discussions can give unpopular children a chance to develop confidence when relating to peers.