216 results for 190202 Computer Gaming and Animation
at University of Queensland eSpace - Australia
Abstract:
For researchers investigating online communities, the existence of the internet has made the activities and opinions of community members visible in a public domain. FPS gaming culture is a highly literate culture - members communicate and represent themselves in textual forms online, and the culture makes use of a wide variety of communication and publishing technologies. While a significant amount of insider knowledge is required to understand and interpret such online content, a large body of material is available to researchers online, and sometimes provides more reliable and enlightening information than that generated by more traditional research methods. While the abundance of data available online in some ways makes research far easier, it also creates new dilemmas and challenges for researchers. What extra knowledge is required of the researcher? How can one ensure that one's interpretations of member statements are made with an understanding of meaning within that culture? What responsibilities does the researcher have in their representation of the culture under examination? What ethical issues must be considered?
Abstract:
The influence of initial perturbation geometry and material properties on final fold geometry has been investigated using finite-difference (FLAC) and finite-element (MARC) numerical models. Previous studies using these two different codes reported very different folding behaviour although the material properties, boundary conditions and initial perturbation geometries were similar. The current results establish that the discrepancy was not due to the different computer codes but due to the different strain rates employed in the two previous studies (i.e. 10(-6) s(-1) in the FLAC models and 10(-14) s(-1) in the MARC models). As a result, different parts of the elasto-viscous rheological field were being investigated. For the same material properties, strain rate and boundary conditions, the present results using the two different codes are consistent. A transition in folding behaviour, from a situation where the geometry of the initial perturbation determines final fold shape to a situation where material properties control the final geometry, is produced using both models. This transition takes place with increasing strain rate, decreasing elastic moduli or increasing viscosity (reflecting in each case the increasing influence of the elastic component in the Maxwell elastoviscous rheology). The transition described here is mechanically feasible but is associated with very high stresses in the competent layer (on the order of GPa), which is improbable under natural conditions. (C) 2000 Elsevier Science Ltd. All rights reserved.
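The strain-rate dependence described above can be illustrated with a back-of-the-envelope calculation: in a Maxwell elasto-viscous material, the product of the relaxation time (viscosity over elastic modulus) and the strain rate indicates whether the elastic or the viscous component dominates. The sketch below is not from the paper; the viscosity and shear-modulus values are illustrative assumptions only.

```python
# Sketch (assumed parameter values, not from the study): gauge the elastic
# contribution in a Maxwell elasto-viscous rheology via the dimensionless
# product of relaxation time (eta / G) and strain rate.

def elastic_dominance(viscosity_pa_s, shear_modulus_pa, strain_rate_per_s):
    """Relaxation time (eta/G) times strain rate: values >> 1 mean the
    elastic component dominates; values << 1 mean viscous flow dominates."""
    relaxation_time = viscosity_pa_s / shear_modulus_pa
    return relaxation_time * strain_rate_per_s

ETA = 1e21   # assumed competent-layer viscosity, Pa.s
G = 1e10     # assumed shear modulus, Pa

fast = elastic_dominance(ETA, G, 1e-6)    # strain rate used in the FLAC study
slow = elastic_dominance(ETA, G, 1e-14)   # strain rate used in the MARC study
print(fast, slow)  # the two rates probe different parts of the field
```

With these assumed values the faster strain rate puts the layer well into the elastically dominated regime while the slower rate is viscously dominated, which is consistent with the two earlier studies exploring different parts of the rheological field.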
Abstract:
This study integrated the research streams of computer-mediated communication (CMC) and group conflict by comparing the expression of different types of conflict in CMC groups and face-to-face (FTF) groups over time. The main aim of the study was to compare the cues-filtered-out approach against the social information processing theory. A laboratory study was conducted with 39 groups (19 CMC and 20 FTF) in which members were required to work together over three sessions. The frequencies of task, process, and relationship conflict were analyzed. Findings supported the social information processing theory. There was more process and relationship conflict in CMC groups compared to FTF groups on Day 1. However, this difference disappeared on Days 2 and 3. There was no difference between CMC and FTF groups in the amount of task conflict expressed on any day.
Abstract:
Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
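The testgraph idea above can be pictured as a small directed graph whose edges carry specification operations; traversing it from the start node yields the operation sequences that are executed against the specification (and its mutants). The sketch below is illustrative only, with invented node and operation names, and uses a simple depth-first edge-coverage traversal rather than the tool's actual strategy.

```python
# Hedged sketch of a testgraph: a directed graph over abstract states whose
# edge labels are specification operations. Traversal generates operation
# sequences for animation. Names are invented for illustration.

from collections import defaultdict

class TestGraph:
    def __init__(self, start):
        self.start = start
        self.edges = defaultdict(list)  # node -> [(operation, target_node)]

    def add_edge(self, src, operation, dst):
        self.edges[src].append((operation, dst))

    def sequences(self):
        """Return one operation sequence per maximal path that covers each
        edge at most once (simple depth-first edge coverage)."""
        seqs, visited = [], set()

        def dfs(node, path):
            extended = False
            for op, dst in self.edges[node]:
                if (node, op, dst) not in visited:
                    visited.add((node, op, dst))
                    extended = True
                    dfs(dst, path + [op])
            if not extended and path:
                seqs.append(path)

        dfs(self.start, [])
        return seqs

g = TestGraph("empty")
g.add_edge("empty", "init", "ready")
g.add_edge("ready", "add", "nonempty")
g.add_edge("nonempty", "remove", "ready")
print(g.sequences())  # one covering sequence: init, add, remove
```

A real testgraph tool would additionally attach expected states or predicates to nodes so that each step of a sequence can be checked during animation, and the mutation-analysis step would rerun the same sequences on each mutant specification.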
Abstract:
Objectives: To validate the WOMAC 3.1 in a touch screen computer format, which applies each question as a cartoon in writing and in speech (QUALITOUCH method), and to assess patient acceptance of the computer touch screen version. Methods: The paper and computer formats of WOMAC 3.1 were applied in random order to 53 subjects with hip or knee osteoarthritis. The mean age of the subjects was 64 years (range 45 to 83), 60% were male, 53% were 65 years or older, and 53% used computers at home or at work. Agreement between formats was assessed by intraclass correlation coefficients (ICCs). Preferences were assessed with a supplementary questionnaire. Results: ICCs between formats were 0.92 (95% confidence interval, 0.87 to 0.96) for pain, 0.94 (0.90 to 0.97) for stiffness, and 0.96 (0.94 to 0.98) for function. ICCs were similar in men and women, in subjects with or without previous computer experience, and in subjects below or above age 65. The computer format was found easier to use by 26% of the subjects, the paper format by 8%, and 66% were undecided. Overall, 53% of subjects preferred the computer format, while 9% preferred the paper format, and 38% were undecided. Conclusion: The computer format of the WOMAC 3.1 is a reliable assessment tool. Agreement between computer and paper formats was independent of computer experience, age, or sex. Thus the computer format may help improve patient follow up by meeting patients' preferences and providing immediate results.
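The agreement statistic used above, the intraclass correlation coefficient, can be computed directly from the two-way ANOVA mean squares. The sketch below implements the ICC(2,1) absolute-agreement form (the study does not state which ICC variant was used, so this is an assumption), with invented example scores.

```python
# Minimal sketch (not the authors' analysis) of ICC(2,1): two-way random
# effects, absolute agreement, single measurement. Example scores invented.

def icc_2_1(ratings):
    """ratings: one tuple of k rater scores per subject."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ms_rows = ss_rows / (n - 1)                      # between-subject MS
    ms_cols = ss_cols / (k - 1)                      # between-format MS
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# hypothetical (paper, computer) WOMAC subscores for five subjects
scores = [(12, 13), (7, 7), (20, 19), (4, 5), (15, 15)]
print(round(icc_2_1(scores), 3))
```

Identical ratings from both formats give an ICC of exactly 1; the closer the two formats track each other relative to between-subject spread, the closer the ICC is to the values in the 0.92 to 0.96 range reported above.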
Abstract:
Achieving consistency between a specification and its implementation is an important part of software development. In previous work, we have presented a method and tool support for testing a formal specification using animation and then verifying an implementation of that specification. The method is based on a testgraph, which provides a partial model of the application under test. The testgraph is used in combination with an animator to generate test sequences for testing the formal specification. The same testgraph is used during testing to execute those same sequences on the implementation and to ensure that the implementation conforms to the specification. So far, the method and its tool support have been applied to software components that can be accessed through an application programmer interface (API). In this paper, we use an industrially-based case study to discuss the problems associated with applying the method to a software system with a graphical user interface (GUI). In particular, the lack of a standardised interface, as well as controllability and observability problems, make it difficult to automate the testing of the implementation. The method can still be applied, but the amount of testing that can be carried out on the implementation is limited by the manual effort involved.
Abstract:
Achieving consistency between a specification and its implementation is an important part of software development. In this paper, we present a method for generating passive test oracles that act as self-checking implementations. The implementation is verified using an animation tool to check that the behavior of the implementation matches the behavior of the specification. We discuss how to integrate this method into a framework developed for systematically animating specifications, which means a tester can significantly reduce testing time and effort by reusing work products from the animation. One such work product is a testgraph: a directed graph that partially models the states and transitions of the specification. Testgraphs are used to generate sequences for animation, and during testing, to execute these same sequences on the implementation.
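The passive-oracle idea above, an implementation that checks itself against a specification model, can be sketched in a few lines. The stack example below is invented for illustration; in the actual framework the "spec" side would be the animated formal specification rather than hand-written Python.

```python
# Hedged sketch of a passive test oracle: every operation runs on both a
# specification model and the implementation under test, and any divergence
# fails immediately. The stack example is invented.

class SpecStack:                      # stands in for the animated spec
    def __init__(self): self.items = []
    def push(self, x): self.items.append(x)
    def pop(self): return self.items.pop()

class ImplStack:                      # concrete implementation under test
    def __init__(self): self._data = []
    def push(self, x): self._data.append(x)
    def pop(self): return self._data.pop()
    def state(self): return list(self._data)

class SelfCheckingStack:
    """Passive oracle: forwards each operation to spec and implementation
    and asserts that results and observable state agree."""
    def __init__(self):
        self.spec, self.impl = SpecStack(), ImplStack()

    def push(self, x):
        self.spec.push(x); self.impl.push(x)
        assert self.impl.state() == self.spec.items, "state mismatch"

    def pop(self):
        expected, actual = self.spec.pop(), self.impl.pop()
        assert actual == expected, f"expected {expected}, got {actual}"
        return actual

s = SelfCheckingStack()
s.push(1); s.push(2)
print(s.pop())  # prints 2, silently checked against the spec model
```

Because the checking is passive, the same testgraph-generated sequences used during animation can be replayed on the self-checking implementation without writing separate expected-output tables.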
Abstract:
We discuss a methodology for animating the Object-Z specification language using a Z animation environment. Central to the process is the introduction of a framework to handle dynamic instantiation of objects and management of object references. Particular focus is placed upon building the animation environment through pre-existing tools, and a case study is presented that implements the proposed framework using a shallow encoding in the Possum Z animator. The animation of Object-Z using Z is both automated and made transparent to the user through the use of a software tool named O-zone.
Abstract:
A Geographic Information System (GIS) was used to model datasets of Leyte Island, the Philippines, to identify land which was suitable for a forest extension program on the island. The datasets were modelled to provide maps of the distance of land from cities and towns, land which was a suitable elevation and slope for smallholder forestry and land of various soil types. An expert group was used to assign numeric site suitabilities to the soil types and maps of site suitability were used to assist the selection of municipalities for the provision of extension assistance to smallholders. Modelling of the datasets was facilitated by recent developments of the ArcGIS® suite of computer programs and derivation of elevation and slope was assisted by the availability of digital elevation models (DEM) produced by the Shuttle Radar Topography Mission (SRTM). The usefulness of GIS software as a decision support tool for small-scale forestry extension programs is discussed.
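The overlay logic described above, combining distance, elevation, slope and expert-assigned soil scores into a suitability map, can be sketched per grid cell. All thresholds and scores below are invented assumptions, not the study's criteria.

```python
# Sketch only (assumed thresholds, not the authors' model): a per-cell
# suitability test of the kind a GIS overlay analysis performs.

def cell_suitable(distance_km, elevation_m, slope_deg, soil_score):
    """Boolean land suitability from assumed thresholds plus an
    expert-assigned soil score in [0, 1]."""
    return (distance_km <= 50        # within reach of extension staff
            and elevation_m <= 1000  # assumed suitable elevation band
            and slope_deg <= 18      # assumed workable slope
            and soil_score >= 0.5)   # expert soil suitability cut-off

# three hypothetical cells: (distance, elevation, slope, soil score)
cells = [(12, 300, 8, 0.9), (60, 200, 5, 0.8), (30, 700, 25, 0.6)]
print([cell_suitable(*c) for c in cells])  # only the first cell passes
```

In the real workflow each input is a raster layer (the DEM-derived slope and elevation among them) and the same test is applied cell by cell to produce the suitability map used to select municipalities.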
Abstract:
The second edition of An Introduction to Efficiency and Productivity Analysis is designed to be a general introduction for those who wish to study efficiency and productivity analysis. The book provides an accessible, well-written introduction to the four principal methods involved: econometric estimation of average response models, index numbers, data envelopment analysis (DEA), and stochastic frontier analysis (SFA). For each method, a detailed introduction to the basic concepts is presented, numerical examples are provided, and some of the more important extensions to the basic methods are discussed. Of special interest is the systematic use of detailed empirical applications using real-world data throughout the book. In recent years, there have been a number of excellent advanced-level books published on performance measurement. This book, however, is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners. Indeed, the 2nd Edition maintains its uniqueness: (1) It is a well-written introduction to the field. (2) It outlines, discusses and compares the four principal methods for efficiency and productivity analysis in a well-motivated presentation. (3) It provides detailed advice on computer programs that can be used to implement these performance measurement methods. The book contains computer instructions and output listings for the SHAZAM, LIMDEP, TFPIP, DEAP and FRONTIER computer programs. More extensive listings of data and computer instruction files are available on the book's website: (www.uq.edu.au/economics/cepa/crob2005).
Abstract:
Motivation: Prediction methods for identifying binding peptides could minimize the number of peptides required to be synthesized and assayed, and thereby facilitate the identification of potential T-cell epitopes. We developed a bioinformatic method for the prediction of peptide binding to MHC class II molecules. Results: Experimental binding data and expert knowledge of anchor positions and binding motifs were combined with an evolutionary algorithm (EA) and an artificial neural network (ANN): binding data extraction --> peptide alignment --> ANN training and classification. This method, termed PERUN, was implemented for the prediction of peptides that bind to HLA-DR4(B1*0401). The respective positive predictive values of PERUN predictions of high-, moderate-, low- and zero-affinity binders were assessed as 0.8, 0.7, 0.5 and 0.8 by cross-validation, and 1.0, 0.8, 0.3 and 0.7 by experimental binding. This illustrates the synergy between experimentation and computer modeling, and its application to the identification of potential immunotherapeutic peptides.
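The anchor-position idea that PERUN builds on can be illustrated with a toy position-weight scorer: slide a 9-residue window over a peptide and score each candidate binding core against per-position residue preferences. The weights and motif below are invented for illustration and are not the HLA-DR4(B1*0401) motif; PERUN itself uses an EA-derived alignment feeding an ANN, which this sketch does not reproduce.

```python
# Illustrative toy only: anchor-based scoring of 9-mer binding cores.
# The position weights are invented, not a real MHC class II motif.

ANCHOR_WEIGHTS = {          # position -> {residue: score}; assumed values
    0: {"F": 2.0, "Y": 2.0, "W": 1.5},   # P1 hydrophobic anchor
    5: {"S": 1.0, "T": 1.0},             # P6 small-residue preference
    8: {"K": 1.0, "R": 1.0},             # P9 basic preference
}

def score_core(core9):
    """Sum positional weights over a 9-residue candidate binding core."""
    return sum(ANCHOR_WEIGHTS.get(i, {}).get(aa, 0.0)
               for i, aa in enumerate(core9))

def best_core(peptide):
    """Slide a 9-mer window along the peptide; return the top-scoring core."""
    cores = [peptide[i:i + 9] for i in range(len(peptide) - 8)]
    return max(cores, key=score_core)

p = "GAYKFESTMKAL"  # hypothetical peptide
print(best_core(p), score_core(best_core(p)))
```

A trained predictor would replace the fixed weight table with learned parameters and would classify the best-core score into the affinity bands (high, moderate, low, zero) evaluated above.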