79 results for Computer algorithms -- TFM
Abstract:
As the use of technological devices in everyday environments becomes more prevalent, it is clear that access to these devices has become an important aspect of occupational performance. Children are increasingly required to competently manipulate technology such as the computer to fulfil the occupational roles of student and player. Occupational therapists are in a position to facilitate the successful interface between children and standard computer technologies. The literature has supported the use of direct manipulation interfaces in computing, which require mastery of devices such as the mouse. Identification of children likely to experience difficulties with mouse use will inform the development of appropriate methods of intervention promoting mouse skill and further enhance participation in occupational tasks. The aim of this paper is to discuss the development of an assessment of mouse proficiency for children. It describes the construction of the assessment, the content of the test, and its content validity.
Abstract:
Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less magical, and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature real-world problem relating to the lengths of paths in a network.
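As a concrete illustration of the simplest case above, the following Python sketch simulates the one-qubit Deutsch problem in its standard phase-oracle form. This is a generic simulation for intuition only, not the ROM-based encoding or the NMR implementation reported in the paper, and the function names are ours.

```python
# One-qubit Deutsch problem, phase-oracle form: decide with a single
# query whether f: {0,1} -> {0,1} is constant or balanced.
# (Illustrative sketch; not the paper's ROM-based construction.)
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def deutsch(f):
    """Return 'constant' or 'balanced' for the Boolean function f."""
    U_f = np.diag([(-1) ** f(0), (-1) ** f(1)])  # phase oracle |x> -> (-1)^f(x)|x>
    state = H @ U_f @ H @ np.array([1.0, 0.0])   # H, oracle, H applied to |0>
    # Measurement outcome |0> (amplitude of modulus 1) means constant.
    return "constant" if abs(state[0]) > 0.5 else "balanced"

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    print(deutsch(f))  # constant, constant, balanced, balanced
```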
Abstract:
We detail the automatic construction of R matrices corresponding to (the tensor products of) the (0̇_m | α̇_n) families of highest-weight representations of the quantum superalgebras U_q[gl(m|n)]. These representations are irreducible, contain a free complex parameter α, and are 2^(mn)-dimensional. Our R matrices are actually (sparse) rank 4 tensors, containing a total of 2^(4mn) components, each of which is in general an algebraic expression in the two complex variables q and α. Although the constructions are straightforward, we describe them in full here, to fill a perceived gap in the literature. As the algorithms are generally impracticable for manual calculation, we have implemented the entire process in Mathematica, illustrating our results with U_q[gl(3|1)]. (C) 2002 Published by Elsevier Science B.V.
Abstract:
Recently, several groups have investigated quantum analogues of random walk algorithms, both on a line and on a circle. It has been found that the quantum versions have markedly different features from the classical versions. Namely, the variance on the line and the mixing time on the circle increase quadratically faster in the quantum versions as compared to the classical versions. Here, we propose a scheme to implement the quantum random walk on a line and on a circle in an ion trap quantum computer. With current ion trap technology, the number of steps that could be experimentally implemented will be relatively small. However, we show how the enhanced features of these walks could be observed experimentally. In the limit of strong decoherence, the quantum random walk tends to the classical random walk. By measuring the degree to which the walk remains "quantum", this algorithm could serve as an important benchmarking protocol for ion trap quantum computers.
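The quadratically faster spread on the line is easy to reproduce numerically. The sketch below (our own illustration, not the proposed ion-trap scheme) simulates the Hadamard-coined discrete-time quantum walk on a line and compares the standard deviation of the position distribution with the classical sqrt(t) growth.

```python
# Hadamard-coined quantum walk on a line: the position spread grows
# roughly linearly with the number of steps, versus sqrt(t) classically.
import numpy as np

def quantum_walk_std(steps):
    n = 2 * steps + 1                      # positions -steps .. +steps
    amp = np.zeros((n, 2), dtype=complex)  # amplitude[position, coin]
    amp[steps, 0] = 1.0                    # start at the origin, coin |0>
    for _ in range(steps):
        up = (amp[:, 0] + amp[:, 1]) / np.sqrt(2)  # Hadamard coin toss
        dn = (amp[:, 0] - amp[:, 1]) / np.sqrt(2)
        new = np.zeros_like(amp)
        new[1:, 0] = up[:-1]               # coin |0> component steps right
        new[:-1, 1] = dn[1:]               # coin |1> component steps left
        amp = new
    prob = (np.abs(amp) ** 2).sum(axis=1)
    x = np.arange(-steps, steps + 1)
    mean = (x * prob).sum()
    return np.sqrt(((x - mean) ** 2 * prob).sum())

for t in (10, 50, 100):
    print(t, round(quantum_walk_std(t), 2), round(np.sqrt(t), 2))
```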
Abstract:
This paper outlines research on the processes taking place within the coal mineral matter at high temperatures and the development of the relationship between ash fusion temperatures (AFT) and phase equilibria of the coal ash slags. A new thermodynamic database for the Al-Ca-Fe-O-Si system developed by the author was used in conjunction with the thermodynamic computer package F*A*C*T for these purposes. In addition, high temperature experimental studies were undertaken that involved heat treatment and quenching of the ash cones, followed by analyses using different techniques. The study provided new information on the processes taking place during the AFT test and demonstrated the validity of the AFT predictions with F*A*C*T. Examples of practical applications of the AFT prediction method are given in the paper. The results of this study are important not only for AFT predictions, but also, in general, for the application of phase equilibrium science to the characterisation of coal mineral matter interactions at high temperature. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This study integrated the research streams of computer-mediated communication (CMC) and group conflict by comparing the expression of different types of conflict in CMC groups and face-to-face (FTF) groups over time. The main aim of the study was to compare the cues-filtered-out approach against social information processing theory. A laboratory study was conducted with 39 groups (19 CMC and 20 FTF) in which members were required to work together over three sessions. The frequencies of task, process, and relationship conflict were analyzed. Findings supported social information processing theory. There was more process and relationship conflict in CMC groups compared to FTF groups on Day 1; however, this difference disappeared on Days 2 and 3. There was no difference between CMC and FTF groups in the amount of task conflict expressed on any day.
Abstract:
Using benthic habitat data from the Florida Keys (USA), we demonstrate how siting algorithms can help identify potential networks of marine reserves that comprehensively represent target habitat types. We applied a flexible optimization tool, simulated annealing, to represent a fixed proportion of different marine habitat types within a geographic area. We investigated the relative influence of spatial information, planning-unit size, detail of habitat classification, and magnitude of the overall conservation goal on the resulting network scenarios. With this method, we were able to identify many adequate reserve systems that met the conservation goals, e.g., representing at least 20% of each conservation target (i.e., habitat type) while fulfilling the overall aim of minimizing the system area and perimeter. One of the most useful types of information provided by this siting algorithm comes from an irreplaceability analysis, which is a count of the number of times unique planning units were included in reserve system scenarios. This analysis indicated that many different combinations of sites produced networks that met the conservation goals. While individual 1-km² areas were fairly interchangeable, the irreplaceability analysis highlighted larger areas within the planning region that were chosen consistently to meet the goals incorporated into the algorithm. Additionally, we found that reserve systems designed with a high degree of spatial clustering tended to have considerably less perimeter and larger overall areas in reserve, a configuration that may be preferable particularly for sociopolitical reasons. This exercise illustrates the value of using the simulated annealing algorithm to help site marine reserves: the approach makes efficient use of available resources, can be used interactively by conservation decision makers, and offers biologically suitable alternative networks from which an effective system of marine reserves can be crafted.
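To make the siting idea concrete, here is a toy Python sketch of simulated annealing over a grid of planning units: it heavily penalizes any shortfall from a 20% representation target for each habitat type, and lightly penalizes total area and perimeter. The grid size, habitat layout, cost weights, and cooling schedule are illustrative assumptions, not the parameters used in the study.

```python
# Toy simulated annealing for reserve siting on a grid of planning units.
import numpy as np

rng = np.random.default_rng(0)
SIZE, N_HAB, TARGET = 20, 4, 0.20
habitat = rng.integers(0, N_HAB, size=(SIZE, SIZE))  # habitat type per unit
totals = np.bincount(habitat.ravel(), minlength=N_HAB)

def perimeter(sel):
    """Count boundary edges between selected and unselected cells."""
    p = np.pad(sel, 1, constant_values=False)
    return int((p[1:, :] != p[:-1, :]).sum() + (p[:, 1:] != p[:, :-1]).sum())

def cost(sel):
    """Heavy penalty for missed habitat targets, light penalty for area and edge."""
    rep = np.bincount(habitat[sel], minlength=N_HAB)  # selected units per habitat
    shortfall = np.maximum(TARGET * totals - rep, 0).sum()
    return 1000.0 * shortfall + 1.0 * sel.sum() + 2.0 * perimeter(sel)

sel = rng.random((SIZE, SIZE)) < 0.3   # random initial reserve system
c, T = cost(sel), 10.0
for _ in range(20000):
    i, j = rng.integers(SIZE), rng.integers(SIZE)
    sel[i, j] = not sel[i, j]          # propose flipping one planning unit
    c_new = cost(sel)
    if c_new <= c or rng.random() < np.exp((c - c_new) / T):
        c = c_new                      # accept: downhill always, uphill with Boltzmann prob.
    else:
        sel[i, j] = not sel[i, j]      # reject: undo the flip
    T *= 0.9995                        # geometric cooling
print(int(sel.sum()), "of", SIZE * SIZE, "units selected; cost", round(c, 1))
```

Re-running this from different random seeds and tallying how often each unit ends up selected yields exactly the kind of irreplaceability count described above.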
Abstract:
Computer Science is a subject which has difficulty in marketing itself. Further, pinning down a standard curriculum is difficult: there are many preferences which are hard to accommodate. This paper argues the case that part of the problem is the fact that, unlike more established disciplines, the subject does not clearly distinguish the study of principles from the study of artifacts. This point was raised in Curriculum 2001 discussions, and debate needs to start in good time for the next curriculum standard. This paper provides a starting point for debate by outlining a process by which principles and artifacts may be separated, and presents a sample curriculum to illustrate the possibilities. This sample curriculum has some positive points, though these are incidental to the need to start debating the issue. Other models, with a less rigorous ordering of principles before artifacts, would still gain from making it clearer whether a specific concept is fundamental or a property of a specific technology. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
We present an abstract model of the leader election protocol used in the IEEE 1394 High Performance Serial Bus standard. The model is expressed in the probabilistic Guarded Command Language. By formal reasoning based on this description, we establish the probability of the root contention part of the protocol successfully terminating in terms of the number of attempts to do so. Some simple calculations then allow us to establish an upper bound on the time taken for those attempts.
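For intuition about the shape of such a result, here is a back-of-envelope sketch under a deliberately simplified assumption of ours, not the paper's pGCL model: if each root-contention round succeeds independently with probability p, with p = 1/2 when the two contending nodes flip fair coins and contention resolves on differing outcomes, then the probability of termination within n attempts is 1 - (1 - p)^n.

```python
# Probability that root contention resolves within n attempts, assuming
# independent rounds that each succeed with probability p (simplified model,
# not the paper's formal pGCL derivation).
def p_terminated(n, p=0.5):
    return 1 - (1 - p) ** n

for n in (1, 2, 5, 10):
    print(n, p_terminated(n))  # 0.5, 0.75, 0.96875, 0.9990234375
```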
Abstract:
In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed to perform this task, and these hierarchical algorithms have mainly been applied heuristically to such cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. Thus there is a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes down to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
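As a rough stand-in for the two-stage idea (cluster the genes to cut the dimension down, then cluster the tissues on summaries of the gene groups), the sketch below uses scikit-learn's GaussianMixture on synthetic data. EMMIX-GENE itself fits its own mixture models, including mixtures of factor analyzers, so this is only a schematic analogue.

```python
# Two-stage mixture-model clustering: genes first, then tissues.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 2000))  # 80 tissue samples x 2000 genes (synthetic)

# Stage 1: cluster the genes into groups to reduce dimensionality.
gene_gmm = GaussianMixture(n_components=20, covariance_type="diag", random_state=0)
gene_labels = gene_gmm.fit_predict(X.T)

# Stage 2: summarize each gene group by its mean profile, then cluster tissues.
group_means = np.stack(
    [X[:, gene_labels == k].mean(axis=1)
     for k in range(20) if (gene_labels == k).any()], axis=1)
tissue_gmm = GaussianMixture(n_components=2, random_state=0)
tissue_labels = tissue_gmm.fit_predict(group_means)
print(np.bincount(tissue_labels))  # sizes of the two tissue clusters
```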
Abstract:
The Test of Mouse Proficiency (TOMP) was developed to assist occupational therapists and education professionals assess computer mouse competency skills in children from preschool to upper primary (elementary) school age. The preliminary reliability and validity of TOMP are reported in this paper. Methods used to examine the internal consistency, test-retest reliability, and criterion- and construct-related validity of the test are elaborated. In the continuing process of test refinement, these preliminary studies support to varying degrees the reliability and validity of TOMP. Recommendations for further validation of the assessment are discussed along with indications for potential clinical application.
Abstract:
The article describes an attempt to improve student learning outcomes in a computer networks course by making lectures more active learning experiences. Quick quizzes, group and individual exercises, the review of student questions, and multiple breaks were incorporated into the weekly three-hour lectures. Student responses to the modified lectures were overwhelmingly positive: over 85% of respondents agreed that the lectures aided understanding, with large majorities of the respondents finding the individual activities useful to their learning. Although student examination performance improved over the previous year, performance on an examination question that was designed to examine deep understanding remained unchanged.
Abstract:
In this paper we propose a second linearly scalable method for solving large master equations arising in the context of gas-phase reactive systems. The new method is based on the well-known shift-invert Lanczos iteration, with the inverse of the master equation matrix applied by a GMRES iteration that is preconditioned using the diffusion approximation to the master equation. In this way we avoid the cubic scaling of traditional master equation solution methods while maintaining the speed of a partial spectral decomposition. The method is tested using a master equation modeling the formation of propargyl from the reaction of singlet methylene with acetylene, proceeding through long-lived isomerizing intermediates. (C) 2003 American Institute of Physics.
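Schematically, the idea can be mimicked with SciPy: run Lanczos in shift-invert mode, but supply the inverse as a LinearOperator whose matrix-vector product is a preconditioned GMRES solve rather than a direct factorization. The random symmetric generator and the diagonal (Jacobi) preconditioner below are placeholders; the paper preconditions with the diffusion approximation to the master equation. The `rtol` keyword assumes SciPy >= 1.12.

```python
# Shift-invert Lanczos with an iteratively applied inverse (sketch).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh, gmres

n = 500
B = sp.random(n, n, density=0.01, random_state=0)
W = B + B.T                                       # symmetric nonnegative "rates"
A = (W - sp.diags(np.asarray(W.sum(axis=1)).ravel())).tocsc()  # rows sum to zero
sigma = -1e-3                                     # shift near the slow eigenvalues
A_shift = (A - sigma * sp.identity(n)).tocsc()

prec = sp.diags(1.0 / A_shift.diagonal())         # crude Jacobi preconditioner

def apply_inverse(b):
    """Apply (A - sigma*I)^-1 b by preconditioned GMRES, avoiding factorization."""
    x, info = gmres(A_shift, b, M=prec, rtol=1e-9, maxiter=2000)
    return x

OPinv = LinearOperator((n, n), matvec=apply_inverse)
vals, _ = eigsh(A, k=4, sigma=sigma, OPinv=OPinv, which="LM")
print(vals)  # the eigenvalues of A closest to sigma
```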