932 results for 280202 Computer Graphics
Abstract:
As the graphics race subsides and gamers grow weary of predictable and deterministic game characters, game developers must put aside their “old faithful” finite state machines and look to more advanced techniques that give users the gaming experience they crave. The next industry breakthrough will come from characters that behave realistically and that can learn and adapt, rather than from more polygons, higher-resolution textures and more frames per second. This paper explores the various artificial intelligence techniques that are currently being used by game developers, as well as techniques that are new to the industry. The techniques covered in this paper are finite state machines, scripting, agents, flocking, fuzzy logic and fuzzy state machines, decision trees, neural networks, genetic algorithms and extensible AI. This paper introduces each of these techniques, explains how they can be applied to games and describes how commercial games are currently making use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
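To make the "old faithful" technique concrete, here is a minimal sketch of a finite state machine for a game character, written in Python purely for illustration; the states, thresholds and the GuardAI class name are assumptions, not taken from the paper.

```python
# Minimal sketch of a game-character finite state machine (FSM).
# States, thresholds and names are illustrative assumptions, not from the paper.
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    FLEE = auto()

class GuardAI:
    def __init__(self):
        self.state = State.PATROL

    def update(self, player_visible: bool, health: float) -> State:
        # Hard-coded transition rules: this determinism is exactly the
        # predictability the paper argues players eventually tire of.
        if self.state == State.PATROL and player_visible:
            self.state = State.CHASE
        elif self.state == State.CHASE and health < 0.25:
            self.state = State.FLEE
        elif self.state == State.FLEE and health >= 0.75:
            self.state = State.PATROL
        return self.state

guard = GuardAI()
print(guard.update(player_visible=True, health=1.0))  # State.CHASE
print(guard.update(player_visible=True, health=0.1))  # State.FLEE
```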
Abstract:
The construction of timelines of computer activity is a part of many digital investigations. These timelines of events are composed of traces of historical activity drawn from system logs and potentially from evidence of events found in the computer file system. A potential problem with the use of such information is that some of it may be inconsistent and contradictory, thus compromising its value. This work introduces a software tool (CAT Detect) for the detection of inconsistency within timelines of computer activity. We examine the impact of deliberate tampering through experiments conducted with our prototype software tool. Based on the results of these experiments, we discuss techniques which can be employed to deal with such temporal inconsistencies.
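As an illustration of the kind of temporal inconsistency such a tool targets, the sketch below flags file-system events whose "modified" timestamp precedes their "created" timestamp; the event structure and the single rule are assumptions made for illustration and are not the CAT Detect algorithm itself.

```python
# Illustrative sketch only: detect one simple class of timeline inconsistency,
# a file reported as modified before it was created. Not the CAT Detect algorithm.
from datetime import datetime

events = [
    {"path": "report.doc", "created": datetime(2011, 3, 1, 9, 0),
     "modified": datetime(2011, 3, 1, 10, 30)},
    {"path": "notes.txt", "created": datetime(2011, 3, 2, 14, 0),
     "modified": datetime(2011, 3, 2, 13, 15)},  # contradictory timestamps
]

def find_inconsistencies(events):
    """Return events whose timestamps contradict causal ordering."""
    return [e for e in events if e["modified"] < e["created"]]

for e in find_inconsistencies(events):
    print(f"Inconsistent timeline entry: {e['path']}")
```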
Abstract:
This workshop is a continuation and extension of the successful past workshops exploring the intersection of food, technology, place, and people, namely the 2009 OZCHI workshop Hungry 24/7? HCI Design for Sustainable Food Culture and Sustainable Interaction with Food, Technology, and the City [1] and the 2010 CHI panel Making Food, Producing Sustainability [3]. The workshop aims to bring together experts from diverse backgrounds including academia, government, industry, and not-for-profit organisations. It specifically aims to create a space for discussion and design of innovative approaches to understanding and cultivating sustainable food practices via human-computer interaction (HCI), as well as addressing the wider opportunities for the HCI community to engage with food as a key issue for sustainability. The workshop addresses the environmental, health, and social domains of sustainability in particular, by looking at various conceptual and design approaches to orchestrating sustainable interaction of people and food in and through dynamic techno-social networks.
Abstract:
This study investigated whether conceptual development is greater if students learning senior chemistry hear teacher explanations and other traditional teaching approaches first and then see computer-based visualizations, or vice versa. Five Canadian chemistry classes, taught by three different teachers, studied the topics of Le Chatelier’s Principle and dynamic chemical equilibria using scientific visualizations, with the explanations and visualizations presented in different orders. Conceptual development was measured using a 12-item test based on the Chemistry Concepts Inventory. Data were obtained about the students’ abilities, learning styles (auditory, visual or kinesthetic) and sex, and the relationships between these factors and the conceptual development due to the teaching sequences were investigated. It was found that teaching sequence is not important in terms of students’ conceptual learning gains, either across the whole cohort or for any of the three subgroups.
Abstract:
Students struggle with learning to program. In recent years, not only has there been a dramatic drop in the number of students enrolling in IT and Computer Science courses, but attrition from these courses continues to be significant. Introductory programming subjects traditionally have high failure rates and, as they tend to be core to IT and Computer Science courses, can be a road block for many students in their university studies. Is programming really that difficult, or are there other barriers to learning that have a serious and detrimental effect on student progression? In-class experiments were conducted in introductory programming units to confirm our hypothesis that pair-programming would benefit students' learning to program. We investigated the social and cultural barriers to learning programming by questioning students' perceptions of confidence, difficulty and enjoyment of programming. The results of paired and non-paired students were compared to determine the effect of pair-programming on learning outcomes. Both the empirical and anecdotal results of our experiments strongly supported our hypothesis.
Abstract:
Visual modes of representation have always been very important in science and science education. Interactive computer-based animations and simulations offer new visual resources for chemistry education. Many studies have shown that students enjoy learning with visualisations, but few have explored how learning outcomes compare when teaching with or without visualisations. This study employs a quasi-experimental crossover research design and quantitative methods to measure the educational effectiveness, defined as the level of conceptual development on the part of students, of teaching chemistry with computer-based scientific visualisations versus teaching without visualisations. In addition to finding that teaching with visualisations offered outcomes that were not significantly different from teaching without visualisations, the study also explored differences in outcomes for male and female students, students with different learning styles (visual, aural, kinesthetic) and students of differing levels of academic ability.
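As a rough sketch of the kind of quantitative comparison a crossover design of this sort involves, the snippet below contrasts hypothetical concept-test gain scores for the two conditions with an independent-samples t-test; the data, variable names and choice of test are assumptions for illustration and do not reproduce the study's actual analysis.

```python
# Illustrative only: comparing conceptual-development gains for two teaching
# conditions (with vs. without visualisations). Data and test choice are
# assumptions, not the study's reported analysis.
from statistics import mean
from scipy import stats

# Hypothetical gain scores (post-test minus pre-test) on a 12-item inventory.
gains_with_vis = [3, 2, 4, 1, 3, 2, 5, 2]
gains_without_vis = [2, 3, 3, 1, 4, 2, 4, 3]

t_stat, p_value = stats.ttest_ind(gains_with_vis, gains_without_vis)
print(f"mean gain (with visualisations):    {mean(gains_with_vis):.2f}")
print(f"mean gain (without visualisations): {mean(gains_without_vis):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 suggests no significant difference
```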
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
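One naive way to begin quantifying visual consistency, well short of the connectionist and aliasing-detection techniques the thesis develops, is a per-pixel comparison of a rendered frame against a known-good reference; the sketch below shows such a baseline check, with the file names and threshold being assumptions for illustration.

```python
# Baseline sketch: mean per-pixel difference between a rendered frame and a
# reference image. Not the thesis's techniques; names and threshold are assumed.
import numpy as np
from PIL import Image

def mean_pixel_difference(rendered_path: str, reference_path: str) -> float:
    rendered = np.asarray(Image.open(rendered_path).convert("RGB"), dtype=float)
    reference = np.asarray(Image.open(reference_path).convert("RGB"), dtype=float)
    if rendered.shape != reference.shape:
        raise ValueError("Frames must have identical dimensions")
    return float(np.abs(rendered - reference).mean())

# Hypothetical usage: flag a frame whose deviation exceeds a tuned threshold.
# score = mean_pixel_difference("frame_0042.png", "reference_0042.png")
# if score > 8.0:
#     print(f"Possible rendering defect (mean difference {score:.1f})")
```

A fixed-reference comparison like this assumes a reproducible camera and scene state, in contrast to the environment-independent approaches described in the abstract.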
Abstract:
Internet and computer addiction has been a popular research area since the 1990s. Studies on Internet and computer addiction have usually been conducted in the US, and the investigation of computer and Internet addiction in different countries is an interesting area of research. This study investigates computer and Internet addiction among teenagers and Internet cafe visitors in Turkey. We administered a survey to 983 visitors in Internet cafes. The results show that Internet cafe visitors are usually teenagers, mostly middle- and high-school students, and are usually occupied with computer and Internet applications such as chat, e-mail, browsing and games. The teenagers come to the Internet cafe to spend time with friends and the computers. In addition, about 30% of cafe visitors admit to having an Internet addiction, and about 20% specifically mention the problems that they are having with the Internet. It is rather alarming to consider the types of activities that the teenagers are performing in an Internet cafe, their reasons for being there, the degree of self-awareness about Internet addiction, and the lack of control over applications in the cafe.
Abstract:
The ability to decode graphics is an increasingly important component of mathematics assessment and curricula. This study examined 50 students aged 9 to 10 years (23 male, 27 female) as they solved items from six distinct graphical languages (e.g., maps) that are commonly used to convey mathematical information. The results of the study revealed: 1) factors which contribute to success or hinder performance on tasks with various graphical representations; and 2) how the literacy and graphical demands of tasks influence the mathematical sense making of students. The outcomes of this study highlight the changing nature of assessment in school mathematics and identify the function and influence of graphics in the design of assessment tasks.
Abstract:
The growth of solid tumours beyond a critical size is dependent upon angiogenesis, the formation of new blood vessels from an existing vasculature. Tumours may remain dormant at microscopic sizes for some years before switching to a mode in which growth of a supportive vasculature is initiated. The new blood vessels supply nutrients, oxygen, and access to routes by which tumour cells may travel to other sites within the host (metastasize). In recent decades an abundance of biological research has focused on tumour-induced angiogenesis in the hope that treatments targeted at the vasculature may result in a stabilisation or regression of the disease: a tantalizing prospect. The complex and fascinating process of angiogenesis has also attracted the interest of researchers in the field of mathematical biology, a discipline that is, for mathematics, relatively new. The challenge in mathematical biology is to produce a model that captures the essential elements and critical dependencies of a biological system. Such a model may ultimately be used as a predictive tool. In this thesis we examine a number of aspects of tumour-induced angiogenesis, focusing on growth of the neovasculature external to the tumour. Firstly, we present a one-dimensional continuum model of tumour-induced angiogenesis in which elements of the immune system or other tumour-cytotoxins are delivered via the newly formed vessels. This model, based on observations from experiments by Judah Folkman et al., is able to show regression of the tumour for some parameter regimes. The modelling highlights a number of interesting aspects of the process that may be characterised further in the laboratory. The next model we present examines the initiation positions of blood vessel sprouts on an existing vessel, in a two-dimensional domain. This model hypothesises that a simple feedback inhibition mechanism may be used to describe the spacing of these sprouts, with the inhibitor being produced by breakdown of the existing vessel's basement membrane. Finally, we have developed a stochastic model of blood vessel growth and anastomosis in three dimensions. The model has been implemented in C++, includes an OpenGL interface, and uses a novel algorithm for calculating the proximity of the line segments representing a growing vessel. This choice of programming language and graphics interface allows for near-simultaneous calculation and visualisation of blood vessel networks using a contemporary personal computer. In addition, the visualised results may be transformed interactively, and drop-down menus facilitate changes in the parameter values. Visualisation of results is of vital importance in the communication of mathematical information to a wide audience, and we aim to incorporate this philosophy in the thesis. As biological research further uncovers the intriguing processes involved in tumour-induced angiogenesis, we conclude with a comment from mathematical biologist Jim Murray: mathematical biology is "... the most exciting modern application of mathematics."
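The thesis mentions a novel algorithm for computing the proximity of the line segments that represent growing vessels; for context, the sketch below shows a standard textbook minimum-distance computation between two 3D segments, the sort of geometric test an anastomosis check relies on. The function name and the use of NumPy are assumptions, and this is not the thesis's own algorithm.

```python
# Standard closest-approach distance between two 3D line segments, useful for
# detecting when growing vessel tips come close enough to anastomose.
# Illustrative only; not the thesis's novel proximity algorithm.
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-9):
    """Minimum distance between segments [p1, q1] and [p2, q2]."""
    p1, q1, p2, q2 = map(np.asarray, (p1, q1, p2, q2))
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1.dot(d1), d2.dot(d2), d2.dot(r)
    if a <= eps and e <= eps:                       # both segments are points
        return float(np.linalg.norm(r))
    if a <= eps:                                    # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1.dot(r)
        if e <= eps:                                # second segment is a point
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1.dot(d2)
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return float(np.linalg.norm((p1 + d1 * s) - (p2 + d2 * t)))

# Parallel unit segments offset by one unit in y and z: distance is sqrt(2).
print(segment_distance((0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1)))  # ~1.414
```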
Abstract:
The Graphics-Decoding Proficiency (G-DP) instrument was developed as a screening test for the purpose of measuring students’ (aged 8-11 years) capacity to solve graphics-based mathematics tasks. These tasks include number lines, column graphs, maps and pie charts. The instrument was developed within a theoretical framework which highlights the various types of information graphics commonly presented to students in large-scale national and international assessments. The instrument provides researchers, classroom teachers and test designers with an assessment tool which measures students’ graphics decoding proficiency across and within five broad categories of information graphics. The instrument has implications for a number of stakeholders in an era where graphics have become an increasingly important way of representing information.
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
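To give a flavour of what a design-level security metric of this kind might look like, the sketch below computes the fraction of security-critical ("classified") attributes that a class exposes outside its own interface, a simple encapsulation-based exposure ratio; the data model and the specific ratio are illustrative assumptions, not the metrics actually defined in the thesis.

```python
# Illustrative sketch of a design-level security metric: the proportion of
# classified (high-security) attributes that are not private. The data model
# and ratio are assumptions, not the thesis's metric definitions.
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    classified: bool   # holds high-security data
    private: bool      # encapsulated behind the class interface

def classified_exposure(attributes):
    """Fraction of classified attributes that are exposed (lower is better)."""
    classified = [a for a in attributes if a.classified]
    if not classified:
        return 0.0
    exposed = [a for a in classified if not a.private]
    return len(exposed) / len(classified)

design = [
    Attribute("password_hash", classified=True, private=True),
    Attribute("session_token", classified=True, private=False),
    Attribute("display_name", classified=False, private=False),
]
print(f"Classified attribute exposure: {classified_exposure(design):.2f}")  # 0.50
```

A ratio like this can be computed from design artifacts such as UML class diagrams as well as from parsed source code, which is what allows the same quantity to be compared across revisions of a system, as described above.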