943 results for Arithmetic.
Mental computation: the identification of associated cognitive, metacognitive and affective factors
Abstract:
While it is commonly accepted that computability on a Turing machine in polynomial time represents a correct formalization of the notion of a feasibly computable function, there is no similar agreement on how to extend this notion to functionals, that is, on which functionals should be considered feasible. One possible paradigm was introduced by Mehlhorn, who extended Cobham's definition of feasible functions to type 2 functionals. Subsequently, this class of functionals (with inessential changes of the definition) was studied by Townsend, who calls this class POLY, and by Kapron and Cook, who call the same class basic feasible functionals. Kapron and Cook gave an oracle Turing machine model characterisation of this class. In this article, we demonstrate that the class of basic feasible functionals has recursion-theoretic properties which naturally generalise the corresponding properties of the class of feasible functions, thus giving further evidence that the notion of feasibility of functionals mentioned above is correctly chosen. We also improve the Kapron and Cook result on machine representation. Our proofs are based on essential applications of logic. We introduce a weak fragment of second-order arithmetic with second-order variables ranging over functions from N to N which suitably characterises basic feasible functionals, and show that it is a useful tool for investigating the properties of basic feasible functionals. In particular, we provide an example of how one can extract feasible programs from mathematical proofs that use nonfeasible functions.
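For orientation, Cobham's feasible functions are generated from basic functions by composition and by limited recursion on notation, and Mehlhorn's definition lifts this scheme to type 2. A standard statement of the type-1 scheme (our notation, not necessarily the article's) is:

    \begin{aligned}
    f(\bar{x}, 0) &= g(\bar{x}),\\
    f(\bar{x}, s_i(y)) &= h_i(\bar{x}, y, f(\bar{x}, y)), \qquad s_i(y) = 2y + i,\; i \in \{0, 1\},\\
    |f(\bar{x}, y)| &\le |k(\bar{x}, y)| \quad \text{for some previously defined } k.
    \end{aligned}

The size bound in the last line is what keeps the recursion polynomial-time; in the type-2 setting, the analogous bounds must also account for the lengths of values returned by the function arguments, which is where second-order notions of length enter.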
Abstract:
Learning to operate algebraically is a complex process that depends on extending arithmetic knowledge to the more complex concepts of algebra. Current research has shown a gap between arithmetic and algebraic knowledge and suggests a pre-algebraic level as a step between the two knowledge types. This paper examines arithmetic and algebraic knowledge from a cognitive perspective in an effort to determine what constitutes a pre-algebraic level of understanding. Results of a longitudinal study designed to investigate students' readiness for algebra are presented. Thirty-three students in Grades 7, 8, and 9 participated. A model for the transition from arithmetic to pre-algebra to algebra is proposed, and students' understanding of relevant knowledge is discussed.
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use structured questionnaires in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify both pests that are harmful and those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The interest of our approach is illustrated in a case study where several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the resulting decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
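As a rough illustration of the simulate-then-score idea (not the authors' actual model: the prevalence, latent effect sizes, noise level, and decision threshold below are all invented), a minimal sketch in Python:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n=10000, k=5, scale=6):
        # toy stand-in for a stochastic pest model: k ordinal item scores
        # per assessed pest, driven by a latent harmfulness level
        harmful = rng.random(n) < 0.3                    # true pest status
        latent = np.where(harmful, 0.7, 0.3)             # invented effect sizes
        raw = latent[:, None] * scale + rng.normal(0, 1, (n, k))
        scores = np.clip(np.round(raw), 1, scale).astype(int)
        return harmful, scores

    def sens_spec(decision, truth):
        sens = (decision & truth).sum() / truth.sum()
        spec = (~decision & ~truth).sum() / (~truth).sum()
        return sens, spec

    harmful, scores = simulate()
    # compare sum-based and multiplication-based aggregation, using a
    # median decision threshold (also an arbitrary choice)
    for name, agg in [("sum", scores.sum(axis=1)),
                      ("product", scores.prod(axis=1))]:
        print(name, sens_spec(agg > np.median(agg), harmful))

Repeating such runs while varying the number of scale points and the aggregation rule enables the kind of sensitivity/specificity comparison described above.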
Abstract:
Background: The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also providing a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Methods: Typically developing children (n = 67) from Years 1–3 completed a number-to-position numerical estimation task (0–20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance, we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Results: Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion: In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach or, alternatively, could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
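The logarithmic-versus-linear model comparison can be sketched with ordinary least squares; the target/estimate numbers below are invented, not the study's data:

    import numpy as np

    # invented number-to-position estimates on a 0-20 line (not study data)
    targets = np.array([2, 4, 6, 9, 12, 15, 18], dtype=float)
    estimates = np.array([3.5, 6.0, 8.0, 10.5, 12.5, 15.5, 17.5])

    def r_squared(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    lin = np.polyfit(targets, estimates, 1)              # y = a*x + b
    log = np.polyfit(np.log(targets), estimates, 1)      # y = a*ln(x) + b

    r2_lin = r_squared(estimates, np.polyval(lin, targets))
    r2_log = r_squared(estimates, np.polyval(log, np.log(targets)))
    print(f"linear R^2 = {r2_lin:.3f}, logarithmic R^2 = {r2_log:.3f}")

A better logarithmic fit is the usual signature of younger children's estimates; the move toward a better linear fit with age is the logarithmic-linear shift discussed above.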
Abstract:
We derive an explicit method of computing the composition step in Cantor’s algorithm for group operations on Jacobians of hyperelliptic curves. Our technique is inspired by the geometric description of the group law and applies to hyperelliptic curves of arbitrary genus. While Cantor’s general composition involves arithmetic in the polynomial ring F_q[x], the algorithm we propose solves a linear system over the base field which can be written down directly from the Mumford coordinates of the group elements. We apply this method to give more efficient formulas for group operations in both affine and projective coordinates for cryptographic systems based on Jacobians of genus 2 hyperelliptic curves in general form.
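For contrast with the paper's linear-system method, here is a sketch of the classical Cantor composition-and-reduction it replaces, in Python/SymPy for an odd-characteristic curve y^2 = f(x) with h = 0; the prime, curve, and sample points are invented toy values:

    from sympy import symbols, Poly

    x = symbols('x')
    p = 23                                    # toy prime
    f = Poly(x**5 + 4*x + 1, x, modulus=p)    # toy genus-2 curve y^2 = f(x)

    def gcdex(a, b):
        # extended Euclid for Polys over GF(p): s*a + t*b = g, g monic
        s0, s1 = Poly(1, x, modulus=p), Poly(0, x, modulus=p)
        t0, t1 = Poly(0, x, modulus=p), Poly(1, x, modulus=p)
        while not b.is_zero:
            q, r = divmod(a, b)
            a, b, s0, s1, t0, t1 = b, r, s1, s0 - q*s1, t1, t0 - q*t1
        c = Poly(a.LC(), x, modulus=p)        # normalise the gcd to be monic
        return s0.exquo(c), t0.exquo(c), a.exquo(c)

    def cantor_add(D1, D2, g=2):
        # add Mumford divisors (u1, v1) + (u2, v2) on Jac(y^2 = f(x)), h = 0
        u1, v1 = D1; u2, v2 = D2
        e1, e2, d1 = gcdex(u1, u2)            # d1 = gcd(u1, u2)
        c1, c3, d = gcdex(d1, v1 + v2)        # d  = gcd(u1, u2, v1 + v2)
        s1, s2, s3 = c1*e1, c1*e2, c3
        u = (u1 * u2).exquo(d**2)             # composition
        v = (s1*u1*v2 + s2*u2*v1 + s3*(v1*v2 + f)).exquo(d).rem(u)
        while u.degree() > g:                 # Cantor reduction to deg(u) <= g
            u = (f - v**2).exquo(u)
            v = (-v).rem(u)
        return u.monic(), v

    # usage: two degree-1 divisors from affine points (a, b) with b^2 = f(a)
    fx = lambda a: (a**5 + 4*a + 1) % p
    pts = [(a, b) for a in range(p) for b in range(1, (p + 1)//2)
           if (b*b) % p == fx(a)][:2]
    D1, D2 = [(Poly(x - a, x, modulus=p), Poly(b, x, modulus=p)) for a, b in pts]
    print(cantor_add(D1, D2))

The abstract's point is that this composition works in F_q[x] via polynomial gcds, whereas the proposed method replaces that step with a single linear solve over F_q written down directly from the Mumford coordinates.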
Abstract:
Objective: We explore how accurately and quickly nurses can identify melodic medical equipment alarms when no mnemonics are used, when alarms may overlap, and when concurrent tasks are performed. Background: The international standard IEC 60601-1-8 (International Electrotechnical Commission, 2005) has proposed simple melodies to distinguish seven alarm sources. Previous studies with nonmedical participants reveal poor learning of melodic alarms and persistent confusions between some of them. The effects of domain expertise, concurrent tasks, and alarm overlaps are unknown. Method: Fourteen intensive care and general medical unit nurses learned the melodic alarms without mnemonics in two sessions on separate days. In the second half of Day 2, the nurses identified single alarms or pairs of alarms played in sequential, partially overlapping, or nearly completely overlapping configurations. For half the experimental blocks, nurses performed a concurrent mental arithmetic task. Results: Nurses' learning was poor and was no better than the learning of nonnurses in a previous study. Nurses showed the previously noted confusions between alarms. Overlapping alarms were exceptionally difficult to identify. The concurrent task affected response time but not accuracy. Conclusion: Because of a failure of auditory stream segregation, the melodic alarms cannot be discriminated when they overlap. Directives to sequence the sounding of alarms in medical electrical equipment must be strictly adhered to, or the alarms must be redesigned to support better auditory streaming. Application: Actual or potential uses of this research include the implementation of IEC 60601-1-8 alarms in medical electrical equipment.
Abstract:
Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
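One way to see what a user-defined precision means in software terms (a toy model only, unrelated to the authors' VHDL designs) is to round a double to a chosen mantissa width and observe the accuracy loss:

    import math

    def round_mantissa(value, mant_bits):
        # round `value` to mant_bits bits of mantissa, keeping the full
        # exponent range (no overflow/underflow handling in this sketch)
        if value == 0.0:
            return 0.0
        m, e = math.frexp(value)          # value = m * 2**e, 0.5 <= |m| < 1
        scale = 1 << mant_bits
        return math.ldexp(round(m * scale) / scale, e)

    # e.g. pi at 8 mantissa bits versus full double precision
    approx = round_mantissa(math.pi, 8)
    print(approx, abs(approx - math.pi))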
Abstract:
In this paper, we present the outcomes of a project on the exploration of the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited for applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating point VHDL library supporting addition, subtraction, multiplication and division. The variable precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of implementation, the limitations, and future work are also discussed.
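For reference, the TDMA itself is short; a NumPy sketch of the serial algorithm follows (the paper's contribution is pipelining many such solves in FPGA hardware, which this does not capture):

    import numpy as np

    def thomas_solve(a, b, c, d):
        # solve a tridiagonal system: a = sub-diagonal (a[0] unused),
        # b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                       # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # usage on a small diagonally dominant system
    n = 5
    a = np.full(n, -1.0); b = np.full(n, 4.0); c = np.full(n, -1.0)
    d = np.arange(1.0, n + 1)
    print(thomas_solve(a, b, c, d))

The two data-dependent sweeps are O(n) but strictly sequential within one system, which is why hardware parallelism is typically obtained across many independent systems, as the paper does.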
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. The objective is to produce a stereo vision sensor suited to close-range scenes consisting primarily of rocks. This sensor should be able to produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this investigation. A number of area-based matching metrics have been implemented, including the sum of absolute differences (SAD), the sum of squared differences (SSD), normalised cross-correlation (NCC), and their zero-meaned versions. The NCC and the zero-meaned SAD and SSD were found to produce the disparity maps with the highest proportion of valid matches. The plain SAD and SSD were the least computationally expensive, since all their operations take place in integer arithmetic; however, they were extremely sensitive to radiometric distortion. Non-parametric techniques for matching, in particular the rank and census transforms, have also been investigated. The rank and census transforms were found to be robust with respect to radiometric distortion, as well as able to produce disparity maps with a high proportion of valid matches. An additional advantage of both the rank and the census transform is their amenability to fast hardware implementation.
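To make the non-parametric metrics concrete, a NumPy sketch of the census transform and its Hamming-distance matching cost (the window size and images are illustrative, and border handling a real sensor would need is ignored):

    import numpy as np

    def census(img, w=3):
        # encode each pixel as a bit-string of neighbour-vs-centre comparisons
        r = w // 2
        out = np.zeros(img.shape, dtype=np.uint64)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
        return out

    def hamming_cost(c1, c2):
        # matching cost = number of differing bits in the census codes
        x = c1 ^ c2
        n = np.zeros(x.shape, dtype=np.uint8)
        while x.any():
            n += (x & np.uint64(1)).astype(np.uint8)
            x = x >> np.uint64(1)
        return n

    L = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
    R = np.roll(L, 2, axis=1)                 # fake a 2-pixel disparity
    print(hamming_cost(census(L), census(R)))

Because the codes depend only on local intensity orderings, the cost is unchanged by the monotonic radiometric distortions that degrade plain SAD and SSD, and the comparisons and popcounts map naturally onto hardware.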
Abstract:
The civil liability provisions relating to the assessment of damages for past and future economic loss have abrogated the common law principle of full compensation by imposing restrictions on the damages award, most commonly by a “three times average weekly earnings” cap. This consideration of the impact of those provisions is informed by a case study of the Supreme Court of Victoria Court of Appeal decision, Tuohey v Freemasons Hospital (Tuohey), which addressed the construction and arithmetic operation of the Victorian cap for high income earners. While conclusions as to the operation of the cap outside Victoria can be drawn from Tuohey, a number of issues await judicial determination. These issues, which include the impact of the damages caps on the calculation of damages for economic loss in the circumstances of fluctuating income; vicissitudes; contributory negligence; claims per quod servitium amisit; and claims by dependants, are identified and potential resolutions discussed.
Abstract:
Elaborated Intrusion theory (EI theory; Kavanagh, Andrade, & May, 2005) posits two main cognitive components in craving: associative processes that lead to intrusive thoughts about the craved substance or activity, and elaborative processes supporting mental imagery of the substance or activity. We used a novel visuospatial task to test the hypothesis that visual imagery plays a key role in craving. Experiment 1 showed that spending 10 min constructing shapes from modeling clay (plasticine) reduced participants' craving for chocolate compared with spending 10 min 'letting your mind wander'. Increasing the load on verbal working memory using a mental arithmetic task (counting backwards by threes) did not reduce craving further. Experiment 2 compared effects on craving of a simpler verbal task (counting by ones) and clay modeling. Clay modeling reduced overall craving strength and strength of craving imagery, and reduced the frequency of thoughts about chocolate. The results are consistent with EI theory, showing that craving is reduced by loading the visuospatial sketchpad of working memory but not by loading the phonological loop. Clay modeling might be a useful self-help tool to help manage craving for chocolate, snacks and other foods.
Abstract:
Elliptic curve pairings are arguably the most powerful known primitive in public-key cryptography. Upon their introduction just over ten years ago, the computation of pairings was far too slow for them to be considered a practical option. This resulted in a vast amount of research from many mathematicians and computer scientists around the globe aiming to improve this computation speed. From the use of modern results in algebraic and arithmetic geometry to the application of foundational number theory that dates back to the days of Gauss and Euler, cryptographic pairings have since experienced a great deal of improvement. As a result, what was an extremely expensive computation that took several minutes is now a high-speed operation that takes less than a millisecond. This thesis presents a range of optimisations to the state of the art in cryptographic pairing computation. Both by extending prior techniques and by introducing several novel ideas of our own, our work has contributed to record-breaking pairing implementations.