943 results for arithmetic
Abstract:
This paper describes the implementation of a TMR (Triple Modular Redundant) microprocessor system on an FPGA. The system exhibits true redundancy in that three instances of the same processor system (both software and hardware) are executed in parallel. The described system uses software to control external peripherals, and a voter is used to output correct results. An error indication is asserted whenever only two of the three outputs agree or all three outputs disagree. The software has been implemented to conform to a safety-critical coding guideline/standard that is popular in industry. The system was verified by injecting various faults into it.
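As a rough illustration of the voting logic described above, here is a minimal Python sketch of a majority voter with the stated error indication; it is a hypothetical software model of what the paper implements in FPGA hardware, and all names are ours.

def tmr_vote(a, b, c):
    """Return (voted_output, error) for three redundant channel outputs.

    The voted output is the majority value. The error flag is asserted
    when only two of the three outputs agree (one channel is faulty)
    or when all three disagree (no majority exists).
    """
    if a == b == c:
        return a, False          # full agreement: no error
    if a == b or a == c:
        return a, True           # a is in the majority; one channel disagrees
    if b == c:
        return b, True           # b/c majority; channel a disagrees
    return None, True            # all three disagree: no valid output

# Example: channel b suffers a single bit flip.
print(tmr_vote(0x5A, 0x5B, 0x5A))   # (90, True): 0x5A wins the vote, error flagged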
Abstract:
This study investigates evidence of the influence of a play-based psychomotor intervention on the construction of concrete operational thinking and on the neuromotor performance of second-grade students with learning difficulties at a public school in the State of São Paulo. An experimental method is used: the play-based psychomotor intervention (independent variable, IV) is manipulated in order to verify its possible influence on cognitive performance regarding the notions of Conservation, Classification, Seriation and Arithmetic, as well as on neuromotor performance regarding Agility and Right-Left Orientation (dependent variables, DVs), totalling six DVs. The sample comprises 18 schoolchildren of both sexes, aged 7 to 11 years, organised into two groups: experimental (N=9) and control (N=9). The experimental procedure unfolds over 16 sessions in three stages. Both groups individually complete a pre-test (stage 1) and a post-test (stage 3), each consisting of two individual sessions using the following instruments: Piaget's operational tasks, the Piaget-Head Right-Left Orientation test, the Arithmetic subtest of Stein's School Achievement Test, and the Shuttle Run test. Stage 2, exclusive to the experimental group, consists of the play-based psychomotor intervention in 12 group sessions of 50 minutes each. The Wilcoxon test was used to compare results between the groups. The results for the notions of Classification (p=0.010), Seriation (p=0.034), Arithmetic (p=0.157) and Right-Left Orientation (p=0.007) indicate a statistically significant difference, with participants performing better in the post-tests. In the remaining tasks, improved performance was observed in both groups, and therefore cannot be attributed to the intervention. It is concluded that the aim of the study was achieved, since the programme proved effective in developing the participants' operational thinking and their neuromotor performance with respect to Right-Left Orientation.
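For illustration, the Wilcoxon signed-rank comparison mentioned above can be run in Python with scipy.stats.wilcoxon; the scores below are invented for the sketch and are not the study's data.

from scipy.stats import wilcoxon

pre  = [3, 4, 2, 5, 3, 4, 2, 3, 4]      # hypothetical pre-test scores (N=9)
post = [4, 6, 5, 9, 8, 10, 9, 11, 13]   # hypothetical post-test scores

stat, p = wilcoxon(pre, post)           # paired, non-parametric comparison
print(f"W = {stat}, p = {p:.4f}")       # a small p suggests a reliable gain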
Abstract:
Dyslexia is a neurological condition associated with deficits in language acquisition and processing. Varying in severity, it manifests as difficulties in receptive and expressive language, including phonological processing, reading, writing, spelling and handwriting, and sometimes arithmetic. Dyslexia is a hereditary condition associated with several neurological abnormalities in visual and auditory cortical areas. One of the most influential theories proposed to explain dyslexic symptoms is the so-called magnocellular hypothesis, according to which dyslexia results from abnormal processing of visual information, due mainly to dysfunction in the magnocellular system. This dissertation explores this hypothesis by comparing fifteen individuals with dyslexia and fifteen controls, aged between 18 and 30 years, on two visual attention tests. Both experiments measured reaction time to stimuli appearing anywhere on a computer screen while the participants remained seated, head supported by a chin rest, with their eyes fixed on a central target. Experiment I consisted of white stimuli (small circles) presented on a black background. Experiment II used the same methodology, but with green stimuli (small circles) on a red background. The results were analysed according to the quadrants in which the stimuli were presented. Patients and controls did not differ in reaction time to stimuli presented in the lower visual field compared with the upper quadrants of the same individual. Considering all quadrants, dyslexics had slower reaction times in Experiment I, but showed reaction times similar to controls in Experiment II. These results are compatible with abnormalities in the magnocellular system. The implications of these findings for the pathophysiology of dyslexia, as well as for its treatment, warrant further discussion.
Abstract:
Assuming no prior mathematical knowledge, this approachable and straightforward text covers the essential mathematical skills needed by business and management students at undergraduate and MBA level. Clare Morris uses a clear and informal narrative style with examples, painlessly leading the reader through fundamental mathematical principles. "Essential Maths" spans basic arithmetic and algebra, and introduces readers to calculus and simple statistics, employing exercises and activities throughout to enable students to check their learning and reflect on their progress and understanding. A companion website with extra features to accompany the text is available at http://www.palgrave.com/business/morris/index.html
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary-precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
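A plain floating-point sketch of the underlying Picard iteration, y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds, is given below, assuming NumPy; the paper's actual solver encloses the solution using domain-theoretic interval and arbitrary-precision arithmetic, which this toy version omits.

import numpy as np

def picard(f, y0, t, iterations):
    """Iterate y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds on a grid."""
    y = np.full_like(t, y0, dtype=float)
    for _ in range(iterations):
        integrand = f(t, y)
        # cumulative trapezoidal rule approximates the integral up to each t
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        y = y0 + integral
    return y

t = np.linspace(0.0, 1.0, 101)
approx = picard(lambda s, y: y, 1.0, t, iterations=15)   # y' = y, y(0) = 1
print(abs(approx[-1] - np.e))   # error shrinks as the iteration count grows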
Abstract:
In this thesis we present an approach to automated verification of floating-point programs. Existing techniques for automated generation of correctness theorems are extended to produce proof obligations for accuracy guarantees and absence of floating-point exceptions. A prototype automated real number theorem prover is presented, demonstrating a novel application of function interval arithmetic in the context of subdivision-based numerical theorem proving. The prototype is tested on correctness theorems for two simple yet nontrivial programs, proving exception freedom and tight accuracy guarantees automatically. The experiments show how function intervals can be used to combat the information loss problems that limit the applicability of traditional interval arithmetic in the context of hard real number theorem proving.
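To illustrate the interval arithmetic such provers build on, here is a toy Python enclosure type; it is not the thesis's prover, it omits the outward rounding a sound implementation needs, and its names are ours. The example also shows the information loss (the dependency problem) that function intervals aim to reduce.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Evaluating x*x + x over x in [-1, 1] must enclose the true range [-0.25, 2],
# but plain interval evaluation overestimates it because the two occurrences
# of x are treated as independent.
x = Interval(-1.0, 1.0)
print(x * x + x)   # Interval(lo=-2.0, hi=2.0) -- wider than the exact [-0.25, 2]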
Abstract:
This report presents and evaluates a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The benefits of performing MP in the transform domain are analysed in detail. The main contribution of this work is extending MP with wavelets to colour coding and proposing a coding method. We exploit correlations between image subbands after wavelet transformation in RGB colour space. Then, a new and simple quantisation and coding scheme for the colour MP decomposition, based on Run Length Encoding (RLE) and inspired by the coding of indexes in relational databases, is applied. As a final coding step, arithmetic coding is used, assuming uniform distributions of MP atom parameters. The target application is compression at low and medium bit-rates. Coding performance is compared to JPEG 2000, showing the potential to outperform the latter once data models more sophisticated than the uniform one are used for the arithmetic coder. The results are presented for grayscale and colour coding of 12 standard test images.
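As a small illustration of the RLE step, the following Python sketch collapses a symbol stream into (symbol, run-length) pairs; the function name and data are invented, and the paper's actual scheme operates on quantised MP atom indexes.

def rle_encode(symbols):
    """Collapse consecutive repeats into (symbol, run_length) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([s, 1])     # start a new run
    return [tuple(r) for r in runs]

# Sparse index streams contain long zero runs, which RLE compresses well.
print(rle_encode([0, 0, 0, 7, 0, 0, 3, 3]))   # [(0, 3), (7, 1), (0, 2), (3, 2)]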
Abstract:
This investigation sought to explore the nature and extent of school mathematical difficulties among the dyslexic population. Anecdotal reports have suggested that many dyslexics may have difficulties in arithmetic, but few systematic studies have previously been undertaken. The literature pertaining to dyslexia and school mathematics is reviewed. Clues are sought in studies of dyscalculia; these seem inadequate in accounting for dyslexics' reported mathematical difficulties. Similarities between aspects of language and mathematics are examined for underlying commonalities that may partially account for concomitant problems in mathematics in individuals with a written language dysfunction. The performance of children taught using different mathematics work-schemes is assessed to ascertain whether these are associated with differential levels of achievement that may be reflected in the dyslexic population; few differences are found. Findings from studies designed to assess the relationship between written language failure and achievement in mathematics are reported. Study 1 reveals large correlational differences between subtest scores (Wechsler Intelligence Scale for Children, Wechsler, 1976) and three mathematics tests for young dyslexics and children without literacy difficulties. However, few differences are found between levels of attainment at this age (6½ - 9 years). Further studies indicate that, for dyslexics, achievement in school mathematics may be independent of measured intelligence, as is the case with their literacy skills. Studies 3 and 4 reveal that dyslexics' performances on a range of school mathematical topics get relatively worse compared with those of controls (age range 8 - 17 years) as they get older. Extensive item analyses reveal many errors relating strongly to known deficits in the dyslexics' learning style: poor short-term memory, sequencing skills and verbal labelling strategies. Subgroups of dyslexics are identified on the basis of mathematical performance. Tentative explanations, involving alternative neuropsychological approaches, are offered for the measured differences in attainment between these groups.
Abstract:
Illiteracy is often associated with people in developing countries. However, an estimated 50% of adults in a developed country such as Canada lack the literacy skills required to cope with the challenges of today's society; for them, tasks such as reading, understanding, basic arithmetic, and using everyday items are a challenge. Many community-based organizations offer resources and support for these adults, yet overall functional literacy rates are not improving. This is due to a wide range of factors, such as poor retention of adult learners in literacy programs, obstacles in transferring the acquired skills from the classroom to real life, personal attitudes toward learning, and the stigma of functional illiteracy. In our research we examined the opportunities afforded by personal mobile devices in providing learning and functional support to low-literacy adults. We present the findings of an exploratory study aimed at investigating the reception and adoption of a technological solution for adult learners. ALEX© is a mobile application designed for use both in the classroom and in daily life in order to help low-literacy adults become increasingly literate and independent. Such a solution complements literacy programs by increasing users' motivation and interest in learning, and by raising their confidence levels both in their educational pursuits and in facing the challenges of their daily lives. We also reflect on the challenges we faced in designing and conducting our research with two user groups (adults enrolled in literacy classes and in an essential skills program) and contrast the educational impact of, and attitudes toward, such technology between the two. Our conclusions present the lessons learned from our evaluations and the impact of the studies' specific challenges on the outcome and uptake of such mobile assistive technologies in providing practical support to low-literacy adults in conjunction with literacy and essential skills training. © 2013 Her Majesty the Queen in Right of Canada.
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with imprecise and ambiguous data in DEA. This chapter provides a taxonomy and review of fuzzy DEA (FDEA) methods. We present a classification scheme with six categories, namely the tolerance approach, the α-level based approach, the fuzzy ranking approach, the possibility approach, the fuzzy arithmetic approach, and the fuzzy random/type-2 fuzzy set approach. We discuss each category and group the FDEA papers published in the literature over the past 30 years. © 2014 Springer-Verlag Berlin Heidelberg.
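As an illustration of the α-level idea used by one of these categories, the Python sketch below computes α-cuts of a triangular fuzzy number, reducing fuzzy data to ordinary intervals on which a crisp DEA model can then be solved; the representation and names are ours, not the chapter's.

def alpha_cut(tri, alpha):
    """α-cut of a triangular fuzzy number (a, b, c) with peak at b."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

fuzzy_input = (2.0, 3.0, 5.0)          # "about 3" as a triangular fuzzy number
for alpha in (0.0, 0.5, 1.0):
    print(alpha, alpha_cut(fuzzy_input, alpha))
# 0.0 -> (2.0, 5.0); 0.5 -> (2.5, 4.0); 1.0 -> (3.0, 3.0)
# α-level FDEA methods solve a crisp DEA model on such intervals per chosen α.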
Abstract:
The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for verification of numerical software, supporting a substantially more expressive language for specifications than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm and its source code is publicly available. In this paper we report on experiments using PolyPaver that indicate that the additional expressivity does not come at a performance cost when comparing with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
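As a crude stand-in for the kind of integral enclosure such specifications involve, the following Python sketch brackets an integral between lower and upper Riemann sums for a monotone increasing integrand; it ignores floating-point rounding and is not PolyPaver's actual method.

def riemann_enclosure(f, a, b, n):
    """Return (lower, upper) bounds on integral_a^b f for increasing f."""
    h = (b - a) / n
    lower = sum(f(a + i * h) for i in range(n)) * h          # left endpoints
    upper = sum(f(a + (i + 1) * h) for i in range(n)) * h    # right endpoints
    return lower, upper

lo, hi = riemann_enclosure(lambda x: x * x, 0.0, 1.0, 1000)
print(lo, hi)   # brackets the exact value 1/3; the width shrinks as n grows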
Abstract:
Purpose. Whereas many previous studies have identified the association between sustained near work and myopia, few have assessed the influence of concomitant levels of cognitive effort. This study investigates the effect of cognitive effort on near-work induced transient myopia (NITM). Methods. Subjects comprised six early-onset myopes (EOM; mean age 23.7 yrs; mean onset 10.8 yrs), six late-onset myopes (LOM; mean age 23.2 yrs; mean onset 20.0 yrs) and six emmetropes (EMM; mean age 23.8 yrs). Dynamic, monocular, ocular accommodation was measured with the Shin-Nippon SRW-5000 autorefractor. Subjects engaged passively or actively in a 5-minute arithmetic sum-checking task presented monocularly on an LCD monitor via a Badal optical system. In all conditions the task was initially located at near (4.50 D) and, immediately following the task, instantaneously changed to far (0.00 D) for a further 5 minutes. The combinations of active (A) and passive (P) cognition were randomly allocated as P:P, A:P, A:A, and P:A. Results. For the initial near task, LOMs were shown to have a significantly less accurate accommodative response than either EOMs or EMMs (p < 0.001). For the far task, post hoc analyses for refraction identified EOMs as demonstrating significant NITM compared to LOMs (p < 0.05), who in turn showed greater NITM than EMMs (p < 0.001). The data show that for EOMs the level of cognitive activity operating during the near and far tasks determines the persistence of NITM; persistence is maximal when active cognition at near is followed by passive cognition at far. Conclusions. Compared with EMMs, EOMs and LOMs are particularly susceptible to NITM, such that sustained near vision reduces subsequent accommodative accuracy for far vision. It is speculated that the marked NITM found in EOMs may be a consequence of the crystalline lens thinning shown to be a developmental feature of EOM. Whereas the role of small amounts of retinal defocus in myopigenesis remains equivocal, the results show that account needs to be taken of cognitive demand in assessing phenomena such as NITM.
Abstract:
We present a design for a fast all-optical core-node processor that performs packet forwarding in optical networks without header modification. The design is based on a bit-serial architecture using TOADs (terahertz optical asymmetric demultiplexers) as logic gates that perform modulo arithmetic to forward packets.
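A toy software model of modulo-arithmetic forwarding is sketched below in Python; the paper realises this with all-optical TOAD logic gates, and the field width and port count here are invented for illustration.

def forward_port(destination_address: int, num_ports: int) -> int:
    """Choose an output port from the unmodified header via modulo arithmetic."""
    return destination_address % num_ports

# The header is only read, never rewritten, at each hop.
print(forward_port(destination_address=0b101101, num_ports=4))   # port 1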
Abstract:
This paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.