919 results for Computer Game Testing
Abstract:
Computer vision is much more than a technique to sense and recover environmental information from a UAV. It should play a central role in UAV functionality because of the large amount of information that can be extracted, its possible uses and applications, and its natural connection to human-driven tasks, given that vision is our main interface for understanding the world. Our current research focuses on developing techniques that allow UAVs to maneuver in spaces using visual information as their main input source. This task involves creating techniques that allow a UAV to maneuver towards features of interest whenever a GPS signal is not reliable or sufficient, e.g. when signal dropouts occur (which usually happens in urban areas, when flying through terrestrial urban canyons or when operating on remote planetary bodies), or when tracking or inspecting visual targets, including moving ones, without knowing their exact UTM coordinates. This paper also investigates visual servoing control techniques that use the velocity and position of suitable image features to compute the references for flight control. This paper aims to give a global view of the main aspects of the research field of computer vision for UAVs, clustered into four main active research lines: visual servoing and control, stereo-based visual navigation, image processing algorithms for detection and tracking, and visual SLAM. Finally, the results of applying these techniques in several applications are presented and discussed; this study encompasses power line inspection, mobile target tracking, stereo distance estimation, mapping and positioning.
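To make the image-based visual servoing idea concrete, the sketch below computes a 6-DOF camera velocity reference from the error between tracked and desired image-feature positions. It is a minimal illustration only; the gain, the point-feature interaction matrix, and the rough depth estimates are assumptions for the example, not the control law used in the paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic interaction (image Jacobian) matrix for one normalised
    image point (x, y) at an estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,    -(1 + x**2),  y],
        [0.0,     -1.0 / Z,  y / Z, 1 + y**2, -x * y,      -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Stack per-feature Jacobians and return a camera velocity reference
    v = -gain * pinv(L) @ e that drives the features toward their goals."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# Four tracked corner features slightly offset from their desired positions.
feats  = [(0.12, 0.05), (-0.10, 0.06), (-0.11, -0.04), (0.09, -0.05)]
goal   = [(0.10, 0.05), (-0.10, 0.05), (-0.10, -0.05), (0.10, -0.05)]
depths = [2.0, 2.0, 2.0, 2.0]               # rough depth estimates (m)
print(ibvs_velocity(feats, goal, depths))    # [vx, vy, vz, wx, wy, wz]
```

The resulting velocity vector would then be passed to the flight controller as a reference, in the spirit of the visual servoing scheme described above.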
Abstract:
New media, as a free and universal communication tool, has had an impact on the power of the general public to comment on a variety of issues. As the public can comment favourably or unfavourably on advertisements, for example on YouTube, the advertising industry must start using weblogs to research reactions to its advertising campaigns. This exploratory study examines the responses of some advertising industry practitioners, both advertisers and agencies, regarding the impact of new media, specifically weblogs, and the use of new media as a source of research on advertising campaigns.
Abstract:
Computer forensics is the process of gathering and analysing evidence from computer systems to aid in the investigation of a crime. Typically, such investigations are undertaken by human forensic examiners using purpose-built software to discover evidence from a computer disk. This process is a manual one, and the time it takes for a forensic examiner to conduct such an investigation is proportional to the storage capacity of the computer's disk drives. The heterogeneity and complexity of the various data formats stored on modern computer systems compound the problems posed by the sheer volume of data. The decision to undertake a computer forensic examination of a computer system is a decision to commit significant quantities of a human examiner's time. Where there is no prior knowledge of the information contained on a computer system, this commitment of time and energy occurs with little idea of the potential benefit to the investigation. The key contribution of this research is the design and development of an automated process to describe a computer system and its activity for the purposes of a computer forensic investigation. The term proposed for this process is computer profiling. A model of a computer system and its activity has been developed over the course of this research. Using this model, a computer system which is the subject of investigation can be automatically described in terms useful to a forensic investigator. The computer profiling process is resilient to attempts to disguise malicious computer activity. This resilience is achieved by detecting inconsistencies in the information used to infer the apparent activity of the computer. The practicality of the computer profiling process has been demonstrated by a proof-of-concept software implementation. The model and the prototype implementation utilising the model were tested with data from real computer systems. The resilience of the process to attempts to disguise malicious activity has also been demonstrated with practical experiments conducted with the same prototype software implementation.
Abstract:
In this age of evidence-based practice, nurses are increasingly expected to use research evidence in a systematic and judicious way when making decisions about patient care practices. Clinicians recognise the role of research when it provides valid, realistic answers in practical situations. Nonetheless, research is still perceived by some nurses as external to practice, and implementing research findings in practice is often difficult. Since its conceptual origins in the 1960s, the emergence and growth of Nursing Development Units and, later, Practice Development Units have been described in the literature as strategic, organisational vehicles for changing the way nurses think about nursing by promoting and supporting a culture of inquiry and research-based practice. Thus, some scholars argue that practice development is situated in the gap between research and practice. Since the 1990s, the discourse has shifted from the structure and outcomes of developing practice to the process of developing practice, using a Practice Development methodology, underpinned by critical social science theory, as a vehicle for changing the culture and context of care. The nursing and practice development literature is dominated by descriptive reports of local practice development activity, typically focusing on reflection on processes or outcomes of processes, and describing perceived benefits. However, despite the volume of published literature, there is little published empirical research in the Australian or international context on the effectiveness of Practice Development as a methodology for changing the culture and context of care, leaving a gap in the literature. The aim of this study was to develop, implement and evaluate the effectiveness of a Practice Development model for clinical practice review and change in changing the culture and context of care for nurses working in an acute care setting. A longitudinal, pre-test/post-test, non-equivalent control group design was used to answer the following research questions: 1. Is there a relationship between nurses' perceptions of the culture and context of care and nurses' perceptions of research and evidence-based practice? 2. Is there a relationship between engagement in a facilitated process of Practice Development and change in nurses' perceptions of the culture and context of care? 3. Is there a relationship between engagement in a facilitated process of Practice Development and change in nurses' perceptions of research and evidence-based practice? Through a critical analysis of the literature and synthesis of the findings of past evaluations of Nursing and Practice Development structures and processes, this research has identified key attributes, consistent throughout the chronological and theoretical development of Nursing and Practice Development, that exemplify a culture and context of care conducive to creating a culture of inquiry and evidence-based practice. The study findings were then used in the development, validation and testing of an instrument to measure change in the culture and context of care. Furthermore, this research has also provided empirical evidence of the relationship of the key attributes to each other and to barriers to research and evidence-based practice. The research also provides empirical evidence regarding the effectiveness of a Practice Development methodology in changing the culture and context of care.
This research is noteworthy in its contribution to advancing the discipline of nursing by providing evidence of the degree to which attributes of the culture and context of care, namely autonomy and control, workplace empowerment and constructive team dynamics, can be connected to engagement with research and evidence-based practice.
Abstract:
The focus of this article is on the proposed consumer guarantees component of the Australian Consumer Law. The Productivity Commission (PC), in its review of Australia’s Consumer Policy Framework, noted that it had not ‘undertaken the detailed analysis necessary to reach a judgment on the adequacy or otherwise of the existing regulation in this area, or the merits of alternative models such as those adopted in countries such as New Zealand’. Accordingly, it recommended that: ‘The adequacy of existing legislation related to implied warranties and conditions should be examined as part of the development of the new national generic consumer law’.
Abstract:
Digital forensics investigations aim to find evidence that helps confirm or disprove a hypothesis about an alleged computer-based crime. However, the ease with which computer-literate criminals can falsify computer event logs makes the prosecutor's job highly challenging. Given a log which is suspected to have been falsified or tampered with, a prosecutor is obliged to provide a convincing explanation for how the log may have been created. Here we focus on showing how a suspect computer event log can be transformed into a hypothesised actual sequence of events, consistent with independent, trusted sources of event orderings. We present two algorithms which allow the effort involved in falsifying logs to be quantified, as a function of the number of 'moves' required to transform the suspect log into the hypothesised one, thus allowing a prosecutor to assess the likelihood of a particular falsification scenario. The first algorithm always produces an optimal solution but, for reasons of efficiency, is suitable for short event logs only. To deal with the massive amount of data typically found in computer event logs, we also present a second heuristic algorithm which is considerably more efficient but may not always generate an optimal outcome.
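As a rough illustration of the 'moves' idea (not a reproduction of either of the paper's two algorithms), the sketch below counts how many events would have to be relocated to reorder a suspect log into a hypothesised sequence of the same events; the bound used is the log length minus the longest run of events already in the correct relative order.

```python
from bisect import bisect_left

def move_count(suspect, hypothesised):
    """Lower bound on single-event moves needed to reorder `suspect`
    into `hypothesised` (both permutations of the same event IDs):
    n minus the longest increasing subsequence of target positions."""
    pos = {event: i for i, event in enumerate(hypothesised)}
    mapped = [pos[e] for e in suspect]
    tails = []                      # patience-sorting LIS in O(n log n)
    for p in mapped:
        i = bisect_left(tails, p)
        if i == len(tails):
            tails.append(p)
        else:
            tails[i] = p
    return len(mapped) - len(tails)

# Example: one event ("copy") appears out of order relative to the hypothesis.
print(move_count(["boot", "login", "delete", "copy"],
                 ["boot", "copy", "login", "delete"]))   # -> 1
```

A low count would suggest a plausible, low-effort falsification scenario; a high count would suggest extensive tampering would have been required.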
Abstract:
When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode, and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion that modifies features at the levels of score and performance in real time. CMERS's performance was evaluated in two rounds of perceptual testing. In Experiment I, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS. Three musical works were used, each with five emotional variations (normal, happy, sad, angry, and tender). The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of each work's original emotion.
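A hedged sketch of the rule-system idea is given below: a lookup from a target emotion to relative adjustments of mode, tempo, loudness and articulation, applied to a toy performance description. The specific values and field names are illustrative assumptions, not CMERS's published rules.

```python
# Hypothetical feature adjustments per target emotion, loosely following the
# valence/arousal conventions described above (not CMERS's actual rule set).
EMOTION_RULES = {
    "happy":  {"mode": "major", "tempo": 1.15, "loudness": 1.10, "articulation": "staccato"},
    "sad":    {"mode": "minor", "tempo": 0.80, "loudness": 0.85, "articulation": "legato"},
    "angry":  {"mode": "minor", "tempo": 1.20, "loudness": 1.25, "articulation": "staccato"},
    "tender": {"mode": "major", "tempo": 0.90, "loudness": 0.80, "articulation": "legato"},
    "normal": {"mode": None,    "tempo": 1.00, "loudness": 1.00, "articulation": None},
}

def apply_rules(performance, emotion):
    """Return a copy of a simple performance description with the target
    emotion's adjustments applied."""
    rules = EMOTION_RULES[emotion]
    out = dict(performance)
    out["tempo_bpm"] = performance["tempo_bpm"] * rules["tempo"]
    out["velocity"] = min(127, int(performance["velocity"] * rules["loudness"]))
    if rules["mode"]:
        out["mode"] = rules["mode"]
    if rules["articulation"]:
        out["articulation"] = rules["articulation"]
    return out

# A neutral performance pushed toward "sad": slower, quieter, minor, legato.
print(apply_rules({"tempo_bpm": 100, "velocity": 80, "mode": "major",
                   "articulation": "legato"}, "sad"))
```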
Abstract:
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes distinguishing between different people or objects of interest within the same scene difficult.

By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities, we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system.

We compare tracking and abandoned object detection performance for the proposed fusion schemes and the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate the abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is most effective.
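For illustration, a minimal pixel-level (early) fusion step might look like the following sketch, which blends co-registered greyscale visual and thermal frames before a naive background subtraction; the blending weight and threshold are assumptions for the example, not values used in the evaluated systems.

```python
import numpy as np

def early_fuse(visual_gray, thermal, w_visual=0.5):
    """Pixel-level (early) fusion: a weighted blend of a greyscale visual
    frame and a thermal frame, both scaled to [0, 1]."""
    v = visual_gray.astype(np.float32) / 255.0
    t = thermal.astype(np.float32) / 255.0
    return w_visual * v + (1.0 - w_visual) * t

def foreground_mask(fused, background, threshold=0.15):
    """Naive background subtraction on the fused frame."""
    return np.abs(fused - background) > threshold

# Synthetic example: a warm object that is invisible in the visual band.
visual  = np.full((120, 160), 90, dtype=np.uint8)
thermal = np.full((120, 160), 40, dtype=np.uint8)
thermal[40:80, 60:100] = 220                      # hot region
background = early_fuse(np.full((120, 160), 90, dtype=np.uint8),
                        np.full((120, 160), 40, dtype=np.uint8))
mask = foreground_mask(early_fuse(visual, thermal), background)
print(mask.sum(), "foreground pixels detected")
```

Mid and late fusion would instead combine the modalities after feature extraction or after per-modality tracking decisions, respectively.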
Abstract:
Performance evaluation of object tracking systems is typically performed after the data has been processed, by comparing tracking results to ground truth. Whilst this approach is fine when performing offline testing, it does not allow for real-time analysis of the system's performance, which may be of use for live systems to either automatically tune the system or report reliability. In this paper, we propose three metrics that can be used to dynamically assess the performance of an object tracking system. Outputs and results from various stages in the tracking system are used to obtain measures that indicate the performance of motion segmentation, object detection and object matching. The proposed dynamic metrics are shown to accurately indicate tracking errors when visually comparing metric results to tracking output, and are shown to display similar trends to the ETISEO metrics when comparing different tracking configurations.
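As an illustration of a ground-truth-free, per-frame measure in this spirit (not one of the three metrics proposed in the paper), the sketch below scores how much of the motion-segmentation foreground is explained by the detected object bounding boxes:

```python
import numpy as np

def detection_coverage(motion_mask, boxes):
    """Illustrative dynamic metric: fraction of segmented foreground
    pixels covered by detected object bounding boxes, computed per
    frame without any ground truth."""
    covered = np.zeros_like(motion_mask, dtype=bool)
    for x, y, w, h in boxes:
        covered[y:y + h, x:x + w] = True
    fg = motion_mask.astype(bool)
    return (fg & covered).sum() / max(fg.sum(), 1)

# Example: one detection covering most of a synthetic foreground blob.
mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:70] = True
print(detection_coverage(mask, [(28, 18, 45, 40)]))  # close to 1.0 is good
```

A sustained drop in such a score could flag detection or segmentation failures while the live system is running.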
Abstract:
Games and related virtual environments have been a much-hyped area of the entertainment industry. The classic quote is that games are now approaching the size of Hollywood box office sales [1]. Books are now appearing that talk up the influence of games on business [2], and it is one of the key drivers of present hardware development. Some of this 3D technology is now embedded right down at the operating system level via the Windows Presentation Foundation – hit Windows/Tab on your Vista box to find out... In addition to this continued growth in the area of games, there are a number of factors that impact its development in the business community. Firstly, the average age of gamers is approaching the mid-thirties; therefore, a number of people who are in management positions in large enterprises are experienced in using 3D entertainment environments. Secondly, due to the pressure of demand for more computational power in both CPUs and Graphics Processing Units (GPUs), your average desktop, and any decent laptop, can run a game or virtual environment. In fact, the demonstrations at the end of this paper were developed at the Queensland University of Technology (QUT) on a standard Software Operating Environment, with an Intel Dual Core CPU and a basic Intel graphics option. What this means is that the potential exists for the easy uptake of such technology because 1. a broad range of workers are regularly exposed to 3D virtual environment software via games; and 2. present desktop computing power is now strong enough to potentially roll out a virtual environment solution across an entire enterprise. We believe such visual simulation environments can have a great impact in the area of business process modeling. Accordingly, in this article we will outline the communication capabilities of such environments, giving fantastic possibilities for business process modeling applications, where enterprises need to create, manage, and improve their business processes, and then communicate their processes to stakeholders, both process and non-process cognizant. The article then concludes with a demonstration of the work we are doing in this area at QUT.
Abstract:
The following paper presents an evaluation of airborne sensors for use in vegetation management in powerline corridors. Three integral stages in the management process are addressed, including the detection of trees, relative positioning with respect to the nearest powerline, and vegetation height estimation. Image data, including multi-spectral and high-resolution imagery, are analyzed along with LiDAR data captured from fixed-wing aircraft. Ground truth data is then used to establish the accuracy and reliability of each sensor, thus providing a quantitative comparison of sensor options. Tree detection was achieved through crown delineation using a Pulse-Coupled Neural Network (PCNN) and morphological reconstruction applied to multi-spectral imagery. Through testing it was shown to achieve a detection rate of 96%, while the accuracy in correctly segmenting groups of trees and single trees was shown to be 75%. Relative positioning using LiDAR achieved an RMSE of 1.4 m and 2.1 m for cross-track distance and along-track position respectively, while Direct Georeferencing achieved an RMSE of 3.1 m in both instances. The estimation of pole and tree heights measured with LiDAR had an RMSE of 0.4 m and 0.9 m respectively, while Stereo Matching achieved 1.5 m and 2.9 m. Overall, only a small number of poles were missed, with detection rates of 98% and 95% for LiDAR and Stereo Matching respectively.
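For reference, accuracy figures such as the 1.4 m cross-track RMSE are root-mean-square errors between sensor-derived estimates and surveyed ground truth; a minimal sketch of the computation (with made-up numbers, not data from the study) is:

```python
import numpy as np

def rmse(estimates, ground_truth):
    """Root-mean-square error between sensor-derived estimates and
    surveyed ground-truth values."""
    e = np.asarray(estimates, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean((e - g) ** 2)))

# Example with illustrative cross-track distances (metres).
print(rmse([10.2, 7.9, 12.4, 5.1], [11.0, 8.5, 11.0, 5.0]))
```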
Abstract:
This paper discusses the use of models in automatic computer forensic analysis, and proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgements as to the probable usage and evidentiary value of a computer system. The computer profiling object model can be implemented so as to support automated analysis to provide an investigator with the information needed to decide whether manual analysis is required.
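A highly simplified sketch of the "objects with attributes and inter-relationships" idea is given below; the class and field names are illustrative assumptions rather than the model's actual vocabulary.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProfileObject:
    """Hypothetical, simplified profiling object: a named entity with
    typed attributes and links to related objects."""
    name: str
    object_type: str                     # e.g. "user", "application", "document"
    attributes: dict = field(default_factory=dict)
    related_to: List["ProfileObject"] = field(default_factory=list)

    def relate(self, other: "ProfileObject") -> None:
        """Record a bidirectional relationship between two objects."""
        self.related_to.append(other)
        other.related_to.append(self)

# A user object linked to an application and a document it produced.
user = ProfileObject("alice", "user", {"last_login": "2009-03-14"})
app = ProfileObject("word_processor", "application")
doc = ProfileObject("report.doc", "document", {"modified": "2009-03-14"})
user.relate(app)
app.relate(doc)
print(len(user.related_to), len(app.related_to))  # 1 2
```

An investigator or reasoning engine could then traverse such relationships and attribute values to judge the probable usage and evidentiary value of the system, as described above.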
Abstract:
Network Jamming systems provide real-time collaborative media performance experiences for novice or inexperienced users. In this paper we will outline the theoretical and developmental drivers for our Network Jamming software, called jam2jam. jam2jam employs generative algorithmic techniques with particular implications for accessibility and learning. We will describe how theories of engagement have directed the design and development of jam2jam and show how iterative testing cycles in numerous international sites have informed the evolution of the system and its educational potential. Generative media systems present an opportunity for users to leverage computational systems to make sense of complex media forms through interactive and collaborative experiences. Generative music and art are a relatively new phenomenon that uses procedural invention as a creative technique to produce music and visual media. These kinds of systems present a range of affordances that can facilitate new kinds of relationships with music and media performance and production. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences to users with little formal musical or artistic expertise. This presentation examines the educational affordances of these systems, evidenced by field data drawn from the Network Jamming Project. These generative performance systems enable access to a unique kind of music/media ensemble performance with very little musical/media knowledge or skill, and they further offer the possibility of unique interactive relationships with artists and creative knowledge through collaborative performance. Through the process of observing, documenting and analysing young people interacting with the generative media software jam2jam, a theory of meaningful engagement has emerged from the need to describe and codify how users experience creative engagement with music/media performance and the locations of meaning. In this research we observed that the musical metaphors and practices of ‘ensemble’ or collaborative performance and improvisation as a creative process for experienced musicians can be made available to novice users. The relational meanings of these musical practices afford access to high-level personal, social and cultural experiences. Within the creative process of collaborative improvisation lies a series of modes of creative engagement that move from appreciation through exploration, selection and direction toward embodiment. The expressive sounds and visions made in real time by collaborating improvisers are immediate and compelling. Generative media systems let novices access these experiences with simple interfaces that allow them to make highly professional and expressive sonic and visual content simply by using gestures and being attentive and perceptive to their collaborators. These kinds of experiences present the potential for highly complex expressive interactions with sound and media as a performance. Evidence that has emerged from this research suggests that collaborative performance with generative media is transformative and meaningful. In this presentation we draw out these ideas around an emerging theory of meaningful engagement that has evolved from the development of network jamming software. Primarily we focus on demonstrating how these experiences might lead to understandings that may be of educational and social benefit.
Abstract:
The Mobile Learning Kit (MiLK) is a new digital learning application that allows students and teachers to compose, publish, discuss and evaluate their own mobile learning games and events. The research field was interaction design in the context of mobile learning. The research methodology was primarily design-based, supported by collaboration between the participating disciplines of game design, education and information technology. As such, the resulting MiLK application is a synthesis of current pedagogical models and experimental interaction design techniques and technologies. MiLK is a dynamic learning resource for incorporating both formal and informal teaching and learning practices while exploiting mobile phones and contemporary digital social tools in innovative ways. MiLK explicitly addresses other predominant themes in educational scholarship that relate to current education innovation and reform, such as personalised learning, life-long learning and new learning spaces. The success of this project is indicated through rigorous trials and actual uptake of MiLK by international participants in Australia, the UK, the US and South Africa. MiLK was recognised for excellence in the use of emerging technologies for improved learning and teaching as a finalist (top 3) in the Handheld Learning and Innovation Awards in the UK in 2008. MiLK was awarded funding from the Australasian CRC for Interaction Design (ACID) in 2008 to prepare the MiLK application for development, and has been awarded over $230,000 from ACID since 2006. The resulting application and research materials are now being commercialised by a new company, ‘ACID Services’.