919 results for Computer Game Testing


Relevance: 30.00%

Abstract:

Achieving consistency between a specification and its implementation is an important part of software development. In this paper, we present a method for generating passive test oracles that act as self-checking implementations. The implementation is verified using an animation tool to check that the behavior of the implementation matches the behavior of the specification. We discuss how to integrate this method into a framework developed for systematically animating specifications, which means a tester can significantly reduce testing time and effort by reusing work products from the animation. One such work product is a testgraph: a directed graph that partially models the states and transitions of the specification. Testgraphs are used to generate sequences for animation, and during testing, to execute these same sequences on the implementation.
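
The testgraph idea lends itself to a compact illustration. The following is a minimal sketch rather than the paper's implementation: a directed graph whose labelled edges are operations, with sequences generated by bounded path traversal and then replayed against the implementation under a passive oracle. All class and method names here are hypothetical.

```python
from collections import defaultdict

class Testgraph:
    """A directed graph that partially models the states and transitions
    of a specification; edges are labelled with operation names."""

    def __init__(self):
        self.edges = defaultdict(list)  # state -> [(operation, next_state)]

    def add_transition(self, src, operation, dst):
        self.edges[src].append((operation, dst))

    def sequences(self, start, max_len):
        """Enumerate operation sequences by bounded traversal from `start`;
        the same sequences drive both animation and implementation testing."""
        def walk(state, path):
            if path:
                yield list(path)
            if len(path) == max_len:
                return
            for op, nxt in self.edges[state]:
                path.append(op)
                yield from walk(nxt, path)
                path.pop()
        yield from walk(start, [])

def run_sequence(implementation, oracle, sequence):
    """Replay one sequence on the implementation; the passive oracle
    checks each observed result against the specification's behaviour."""
    for op in sequence:
        result = getattr(implementation, op)()
        assert oracle.accepts(op, result), f"divergence at {op}"
```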

Relevance: 30.00%

Abstract:

Issues of wear and tribology are increasingly important in computer hard drives as slider flying heights become lower and disk protective coatings thinner to minimise spacing loss and allow higher areal density. Friction, stiction and wear between the slider and disk in a hard drive were studied using Accelerated Friction Test (AFT) apparatus. Contact Start Stop (CSS) and constant speed drag tests were performed using commercial rigid disks and two different air bearing slider types. Friction and stiction were captured during testing by a set of strain gauges. System parameters were varied to investigate their effect on tribology at the head/disk interface. The chosen parameters were disk spinning velocity, slider fly height, temperature, humidity and intercycle pause. The effect of different disk texturing methods was also studied. Models were proposed to explain the influence of these parameters on tribology. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) were used to study head and disk topography at various test stages and to provide physical parameters to verify the models. X-ray Photoelectron Spectroscopy (XPS) was employed to identify surface composition and determine whether any chemical changes had occurred as a result of testing. The parameters most likely to influence the interface were identified for both CSS and drag testing. Neural Network modelling was used to substantiate the results. Topographical AFM scans of the disk and slider were exported numerically to file and explored extensively. Techniques were developed which improved line and area analysis. A method for detecting surface contacts was also deduced; its results supported and explained the observed AFT behaviour. Finally, surfaces were computer-generated to simulate real disk scans, which allowed contact analysis of many types of surface to be performed. Conclusions were drawn about which disk characteristics most affected contacts, and hence friction, stiction and wear.
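
The surface-contact detection described above can be illustrated numerically. The sketch below is a hypothetical reconstruction, not the thesis method: given a height map such as an exported AFM scan and a nominal slider fly height, it flags the asperities tall enough to reach the slider and reports the contact area fraction. The synthetic Gaussian surface stands in for a real disk scan.

```python
import numpy as np

def detect_contacts(height_map, fly_height):
    """Flag asperities (heights in nm above the mean surface plane) that
    reach a slider flying at `fly_height` nm; return mask and area fraction."""
    contacts = height_map >= fly_height
    return contacts, contacts.mean()

# Synthetic Gaussian rough surface standing in for an exported AFM scan.
rng = np.random.default_rng(0)
surface = rng.normal(loc=0.0, scale=2.0, size=(512, 512))  # 2 nm rms roughness
mask, fraction = detect_contacts(surface, fly_height=5.0)
print(f"contacting area fraction: {fraction:.4%}")
```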

Relevance: 30.00%

Abstract:

This study examined the effects of computer-assisted instruction (CAI) 1 hour per week for 18 weeks on changes in computational scores and attitudes of developmental mathematics students at schools with predominantly Black enrollment. Comparisons were made between students using CAI with differing software--PLATO, CSR, or both together--and students using traditional instruction (TI) only. This study was conducted in the Dade County Public School System from February through June 1991, at two senior high schools. The dependent variables, the State Student Assessment Test (SSAT) and the School Subjects Attitude Scales (SSAS), measured students' computational scores and attitudes toward mathematics in three categories (interest, usefulness, and difficulty), respectively. Univariate analyses of variance were performed on the least squares mean differences from pretest to posttest to test main effects and interactions. A t-test measured significant main effects and interactions. Results were interpreted at the .01 level of significance. Null hypotheses 1, 2, and 3 compared versions of CAI with the control group for changes in mathematical computation scores measured with the SSAT. It could not be concluded that changes in standardized mathematics test scores of students using CAI with differing software 1 hour per week for 18 class hours combined with TI were significantly higher than changes in test scores for students receiving TI only. Null hypotheses 4, 5, and 6 tested the effects of CAI on attitudes toward mathematics for experimental groups against control groups, measured with the SSAS. Changes in attitudes toward mathematics of students using CAI with differing software 1 hour per week for 18 class hours combined with TI were not significantly higher than attitude changes for students receiving TI only. Teacher effect on students' computational scores was a more influential variable than CAI. No interaction was found between gender and learning method on standardized mathematics test scores (null hypothesis 7).
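
As a rough illustration of the kind of pretest-to-posttest comparison reported above (not the study's exact least-squares procedure), the sketch below contrasts invented gain scores for a CAI group and a TI-only group with an independent-samples t-test at the .01 level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented SSAT gain scores (posttest minus pretest) for two groups.
cai_gains = rng.normal(loc=4.0, scale=6.0, size=40)  # CAI combined with TI
ti_gains = rng.normal(loc=3.0, scale=6.0, size=40)   # TI only

t, p = stats.ttest_ind(cai_gains, ti_gains)
alpha = 0.01  # the study's significance level
verdict = "reject" if p < alpha else "fail to reject"
print(f"t = {t:.3f}, p = {p:.4f}: {verdict} H0 at alpha = {alpha}")
```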

Relevance: 30.00%

Abstract:

This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from them was evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human-subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment of the pre-compensation process to yield maximum viewing enhancement.
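
A standard way to realize this kind of pre-compensation, offered here only as a hedged sketch of the general technique (the dissertation's spatial-information refinements are not reproduced), is regularized inverse filtering: divide the image spectrum by the eye's optical transfer function, Wiener-style, so that the eye's blur approximately cancels. The Gaussian PSF below is a hypothetical stand-in for a wavefront-derived aberration; note how clipping to the displayable range limits the achievable inversion, one of the display-device shortcomings the abstract mentions.

```python
import numpy as np

def transfer_function(psf, shape):
    """Embed the PSF in a full-size array with its centre moved to the
    origin, giving a zero-phase optical transfer function."""
    full = np.zeros(shape)
    h, w = psf.shape
    full[:h, :w] = psf
    return np.fft.fft2(np.roll(full, (-(h // 2), -(w // 2)), axis=(0, 1)))

def precompensate(image, psf, k=1e-2):
    """Wiener-style pre-compensation: boost the frequencies the aberrated
    eye attenuates, so that blurring the output roughly recovers `image`."""
    H = transfer_function(psf, image.shape)
    wiener = np.conj(H) / (np.abs(H) ** 2 + k)  # regularised inverse filter
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))
    return np.clip(out, 0.0, 1.0)  # a display can only render [0, 1]

# Hypothetical PSF: a Gaussian blur standing in for a measured aberration.
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()

image = np.zeros((128, 128))
image[48:80, 48:80] = 1.0        # simple high-contrast test target
display_image = precompensate(image, psf)
```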

Relevance: 30.00%

Abstract:

Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in the past years, with several methods, techniques, and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated into a model in the input language of Spin and verified for correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined which includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level. Additionally, information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
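
To give a flavor of the behavioral model underlying SAM, the sketch below executes a generic place/transition Petri net (this is not the SAM tool's representation; the net and all names are invented). Firing rules like these are what get translated into Spin's input language so that temporal properties can be checked over the reachable markings.

```python
class PetriNet:
    """Minimal place/transition net: a marking is a dict place -> tokens."""

    def __init__(self, transitions):
        # transitions: name -> (consumed {place: n}, produced {place: n})
        self.transitions = transitions

    def enabled(self, marking):
        """A transition is enabled when every input place holds enough tokens."""
        return [t for t, (pre, _) in self.transitions.items()
                if all(marking.get(p, 0) >= n for p, n in pre.items())]

    def fire(self, marking, t):
        """Consume input tokens, produce output tokens, return the new marking."""
        pre, post = self.transitions[t]
        new = dict(marking)
        for p, n in pre.items():
            new[p] -= n
        for p, n in post.items():
            new[p] = new.get(p, 0) + n
        return new

# A trivial producer/consumer connector: `produce` puts a token in the
# buffer place, `consume` drains it.
net = PetriNet({
    "produce": ({"idle": 1}, {"buffer": 1, "idle": 1}),
    "consume": ({"buffer": 1}, {}),
})
m = {"idle": 1, "buffer": 0}
m = net.fire(m, "produce")
assert "consume" in net.enabled(m)
```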

Relevance: 30.00%

Abstract:

This research involves the design, development, and theoretical demonstration of models resulting in integrated misbehavior resolution protocols for ad hoc networked devices. Game theory was used to analyze strategic interaction among independent devices with conflicting interests. Packet forwarding at the routing layer of autonomous ad hoc networks was investigated. Unlike existing reputation-based or payment schemes, this model is based on repeated interactions. To enforce cooperation, a community enforcement mechanism was used, whereby selfish nodes that drop packets are punished not only by the victim, but also by all nodes in the network. Then, a stochastic packet forwarding game strategy was introduced. Our solution relaxed the uniform traffic demand that was pervasive in other works. To address the concerns of imperfect private monitoring in resource-aware ad hoc networks, a belief-free equilibrium scheme was developed that reduces the impact of noise on cooperation. This scheme also eliminated the need to infer the private history of other nodes and simplified the computation of an optimal strategy. The belief-free approach reduced node overhead and was easily tractable, making the system operation feasible. Motivated by the versatile nature of evolutionary game theory, the assumption of a rational node was relaxed, leading to the development of a framework for mitigating routing selfishness and misbehavior in multi-hop networks. This is accomplished by setting nodes to play a fixed strategy rather than independently choosing a rational strategy. A range of simulations was carried out that showed improved cooperation between selfish nodes compared with earlier results. Cooperation among ad hoc nodes can also protect a network from malicious attacks: in the absence of a central trusted entity, many security mechanisms and privacy protections require such cooperation. Therefore, using game theory and evolutionary game theory, a mathematical framework has been developed that explores trust mechanisms to achieve security in the network. This framework is one of the first steps towards the synthesis of an integrated solution and demonstrates that security depends solely on the initial trust level that nodes have for each other.
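
The community enforcement mechanism can be sketched as a contagion strategy in a repeated forwarding game; the toy simulation below is a simplified, hypothetical rendering that omits the stochastic and belief-free refinements. Every node forwards until a drop is observed anywhere, after which all nodes punish, so a deviator loses the network's forwarding service along with everyone else.

```python
def simulate(n_nodes, rounds, defector=0, defect_round=10):
    """Repeated forwarding game with community (contagion) enforcement:
    a single observed drop triggers network-wide punishment."""
    punishing = False
    drops_per_round = []
    for r in range(rounds):
        drops = 0
        for node in range(n_nodes):
            deviates = (node == defector and r >= defect_round)
            if punishing or deviates:
                drops += 1  # this node drops instead of forwarding
        drops_per_round.append(drops)
        if drops > 0:
            punishing = True  # every node witnessed a drop: punish from now on
    return drops_per_round

print(simulate(n_nodes=10, rounds=15))
# -> no drops until the deviation, then all 10 nodes punish each round
```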

Relevance: 30.00%

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios; 2) flexibility, for testing new protocols or applications in diverse settings; and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale, high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation in which a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating traffic metadata from real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
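
Time-dilated synchronization, the mechanism SVEET uses to let a slower simulator keep pace with real hosts, amounts to a fixed scaling between wall-clock and virtual time. The sketch below is a schematic illustration with hypothetical names, not SVEET's implementation.

```python
import time

class DilatedClock:
    """Maps real (wall-clock) time onto virtual time slowed by `tdf`,
    the time dilation factor: 1 virtual second takes tdf real seconds."""

    def __init__(self, tdf):
        self.tdf = tdf
        self.start = time.monotonic()

    def virtual_now(self):
        return (time.monotonic() - self.start) / self.tdf

    def sleep_virtual(self, seconds):
        # A dilated real host must wait tdf times longer in wall-clock
        # terms so that its clock and the simulator's clock agree.
        time.sleep(seconds * self.tdf)

clock = DilatedClock(tdf=10.0)   # simulator runs 10x slower than real time
clock.sleep_virtual(0.1)         # 0.1 virtual seconds = 1 real second
print(f"{clock.virtual_now():.2f} virtual seconds elapsed")
```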

Relevance: 30.00%

Abstract:

The purpose of this study was to compare the effects of three student response conditions during computer-assisted instruction on the acquisition and maintenance of social-studies facts. Two of the conditions required active student responding (ASR), whereas the other required an on-task (OT) response. Participants were five fifth-grade students with learning disabilities enrolled in a private school. An alternating treatments design with a best-treatments phase was used to compare the effects of the response procedures on three major dependent measures: same-day tests, next-day tests, and maintenance tests. Each week for six weeks, participants were provided daily one-to-one instruction on sets of 21 unknown social-studies facts using a hypermedia computer program, with a new set of facts being practiced each week. Each set of 21 facts was divided randomly into three conditions: Clicking-ASR, Repeating-ASR, and Listening-OT. Hypermedia lessons began weekly with the concept introduction lesson, followed by practice and testing. Practice and testing occurred four days per week per set. During Clicking-ASR, student practice involved selecting a social-studies response by clicking on an item with the mouse on the hypermedia card. Repeating-ASR instruction required students to orally repeat the social-studies facts when prompted by the computer. During Listening-OT, students listened to the social-studies facts being read by the computer. During weeks seven and eight, instruction occurred with seven unknown facts using only the best treatment. Test results show that for all 5 students, the Repeating-ASR practice procedure resulted in more social-studies facts stated correctly on same-day tests, next-day tests, and one- and two-week maintenance tests. Clicking-ASR was the next most effective procedure. During the seventh and eighth weeks of instruction, when only the best practice condition was implemented, Repeating-ASR produced higher scores than all conditions (including Repeating-ASR) during the first six weeks of the study. The results lend further support to the growing body of literature that demonstrates the positive relation between ASR and student achievement. Much of the ASR literature has focused on the effects of increased ASR during teacher-led or peer-mediated instruction. This study adds a dimension to that research in that it demonstrates the importance of ASR during computer-assisted instruction and further suggests that the type of ASR used during computer-assisted instruction may influence learning. Future research is needed to investigate the effectiveness of other types of ASR during computer-assisted instruction and to identify other fundamental characteristics of effective computer-assisted instruction.

Relevance: 30.00%

Abstract:

As researchers and practitioners move towards a vision of software systems that configure, optimize, protect, and heal themselves, they must also consider the implications of such self-management activities on software reliability. Autonomic computing (AC) describes a new generation of software systems that are characterized by dynamically adaptive self-management features. During dynamic adaptation, autonomic systems modify their own structure and/or behavior in response to environmental changes. Adaptation can result in new system configurations and capabilities, which need to be validated at runtime to prevent costly system failures. However, although the pioneers of AC recognize that validating autonomic systems is critical to the success of the paradigm, the architectural blueprint for AC does not provide a workflow or supporting design models for runtime testing. This dissertation presents a novel approach for seamlessly integrating runtime testing into autonomic software. The approach introduces an implicit self-test feature into autonomic software by tailoring the existing self-management infrastructure to runtime testing. Autonomic self-testing facilitates activities such as test execution, code coverage analysis, timed test performance, and post-test evaluation. In addition, the approach is supported by automated testing tools and a detailed design methodology. A case study that incorporates self-testing into three autonomic applications is also presented. The findings of the study reveal that autonomic self-testing provides a flexible approach for building safe, reliable autonomic software, while limiting the development and performance overhead through software reuse.
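
The implicit self-test idea can be pictured as a validation stage spliced into the adaptation loop: the same management infrastructure that performs a change also tests the new configuration before activating it. The sketch below is a generic illustration with hypothetical names (stage/commit/rollback), not the dissertation's design models.

```python
class AutonomicManager:
    """Adaptation loop with an implicit self-test stage: a new
    configuration is activated only if its runtime tests pass."""

    def __init__(self, system, test_suite):
        self.system = system
        self.test_suite = test_suite  # callables: candidate config -> bool

    def adapt(self, new_config):
        candidate = self.system.stage(new_config)   # apply in isolation
        if all(test(candidate) for test in self.test_suite):
            self.system.commit(candidate)           # validated: activate
            return True
        self.system.rollback(candidate)             # prevent a costly failure
        return False
```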

Relevance: 30.00%

Abstract:

This study examined the interaction of age, attitude, and performance within the context of an interactive computer testing experience. Subjects were 13 males and 47 females between the ages of 55 and 82, with a minimum of a high school education. Initial attitudes toward computers, as measured by the Cybernetics Attitude Scale (CAS), demonstrated overall equivalence between these older subjects and previously tested younger subjects. Post-intervention scores on the CAS indicated that attitudes toward computers were unaffected by either a "fun" or a "challenging" computer interaction experience. The differential effects of a computerized vs. a paper-and-pencil presentation format of a 20-item, multiple-choice vocabulary test were examined. Results indicated no significant differences in the performance of subjects in the two conditions, and no interaction effect between attitude and performance. These findings suggest that the attitudes of older adults toward computers do not affect their computerized testing performance, at least for short-term testing of verbal abilities. A further implication is that, under the conditions presented here, older subjects appear to be unaffected by the mode of testing. The impact of recent advances in technology on older adults is discussed.
