937 results for Computer programming.
Abstract:
This paper describes Electronic Blocks, a new robot construction element designed to allow children as young as age three to build and program robotic structures. The Electronic Blocks encapsulate input, output and logic concepts in tangible elements that young children can use to create a wide variety of physical agents. The children are able to determine the behavior of these agents by the choice of blocks and the manner in which they are connected. The Electronic Blocks allow children without any knowledge of mechanical design or computer programming to create and control physically embodied robots. They facilitate the development of technological capability by enabling children to design, construct, explore and evaluate dynamic robotic systems. A study of four- and five-year-old children using the Electronic Blocks has demonstrated that the interface is well suited to young children. The complexity of the implementation is hidden from the children, leaving them free to explore the functionality of the blocks autonomously. As a consequence, children can move their focus beyond the technology itself and concentrate instead on the construction process and on goals related to the creation of robotic behaviors and interactions. As a resource for robot building, the blocks have proved effective in encouraging children to create robot structures and in allowing them to design and program robot behaviors.
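To make the block-composition idea concrete, here is a minimal toy model in Python (not the actual Electronic Blocks implementation; class and function names such as SensorBlock and run_chain are invented for illustration). The agent's behavior is determined by which blocks are chosen and the order in which they are connected.

# Toy model of composable input, logic and output blocks.
# Hypothetical names; only the composition idea is taken from the abstract.
class SensorBlock:
    """Input block: reports whether something is detected."""
    def __init__(self, detected):
        self.detected = detected
    def output(self, _signal=None):
        return self.detected

class NotBlock:
    """Logic block: inverts the signal of the block it is stacked on."""
    def output(self, signal):
        return not signal

class ActionBlock:
    """Output block: acts (e.g. drives a motor) when its input is on."""
    def __init__(self, name):
        self.name = name
    def output(self, signal):
        return f"{self.name}: {'on' if signal else 'off'}"

def run_chain(blocks):
    """Propagate a signal through a stack of connected blocks, in order."""
    signal = None
    for block in blocks:
        signal = block.output(signal)
    return signal

# A robot that runs its motor only while its sensor detects nothing:
print(run_chain([SensorBlock(detected=False), NotBlock(), ActionBlock("motor")]))

Reordering or swapping blocks changes the resulting behavior, mirroring how children compose different agents from the same elements.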
Abstract:
ASWEC is a joint conference of Engineers Australia and the Australian Computer Society reporting through the Engineers Australia/ACS Joint Board on Software Engineering.
Abstract:
Designed for undergraduate and postgraduate students, academic researchers and industrial practitioners, this book provides comprehensive case studies on numerical computing of industrial processes and step-by-step procedures for conducting industrial computing. It assumes minimal knowledge in numerical computing and computer programming, making it easy to read, understand and follow. Topics discussed include fundamentals of industrial computing, finite difference methods, the Wavelet-Collocation Method, the Wavelet-Galerkin Method, High Resolution Methods, and comparative studies of various methods. These are discussed using examples of carefully selected models from real processes of industrial significance. The step-by-step procedures in all these case studies can be easily applied to other industrial processes without a need for major changes and thus provide readers with useful frameworks for the applications of engineering computing in fundamental research problems and practical development scenarios.
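As a hedged illustration of the simplest technique in that list, a finite-difference calculation (this example is not taken from the book; the grid, time step and diffusivity below are arbitrary assumptions), the following Python sketch advances a 1-D diffusion model with an explicit scheme:

import numpy as np

# Illustrative explicit finite-difference step for 1-D diffusion,
#   u_t = D * u_xx, with fixed boundary values.
# All parameter values below are arbitrary assumptions for the sketch.
D, dx, dt = 1.0e-3, 0.01, 0.02      # diffusivity, grid spacing, time step
r = D * dt / dx**2                  # explicit scheme needs r <= 0.5 for stability

u = np.zeros(101)                   # initial profile on 101 grid points
u[0], u[-1] = 1.0, 0.0              # boundary conditions

for _ in range(500):                # march forward in time
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(round(float(u[50]), 4))       # value at the midpoint after 500 steps

The stability restriction r <= 0.5 is the kind of step-by-step consideration such case studies walk through.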
Abstract:
This paper develops a dynamic model for the cost-effective selection of sites for restoring biodiversity when habitat quality develops over time and is uncertain. A safety-first decision criterion is used to ensure a minimum level of habitat, and this is formulated in a chance-constrained programming framework. The theoretical results show that: (i) including quality growth reduces the overall cost of achieving a future biodiversity target through relatively early establishment of habitats; (ii) considering uncertainty in growth increases total cost and delays establishment; and (iii) cost-effective trading of habitat requires an exchange rate between sites that varies over time. An empirical application to a red-listed umbrella species, the white-backed woodpecker, shows that the total cost of achieving the habitat targets specified in the Swedish recovery plan is doubled if the target is to be achieved with high reliability, and that the equilibrating price on a habitat trading market differs considerably between different quality-growth combinations. © 2013 Elsevier GmbH.
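A hedged sketch of what such a chance-constrained site-selection problem can look like (the notation below is illustrative, not the authors'): choose establishment times so that the probability of meeting the habitat target in the target year is at least the required reliability,

\[
\min_{x_{it}\in\{0,1\}} \sum_{i}\sum_{t} \delta^{t}\, c_{it}\, x_{it}
\quad \text{subject to} \quad
\Pr\!\Big(\sum_{i} q_{iT}(x_{i}) \ge Q^{*}\Big) \ge 1-\alpha ,
\]

where \(x_{it}\) indicates establishing site \(i\) in period \(t\), \(c_{it}\) is the establishment cost and \(\delta\) the discount factor, \(q_{iT}\) is the uncertain habitat quality reached by the target year \(T\), \(Q^{*}\) is the habitat target, and \(1-\alpha\) is the required reliability.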
Abstract:
This PhD project studied the genetic epistemology of novice programmers and provides empirical evidence that the development of programming skills can be described using the neo-Piagetian cognitive development framework. The thesis identifies the manifestations of each of the early neo-Piagetian stages of development in the programming domain: sensorimotor, preoperational and concrete operational. This research informs not only tertiary pedagogy but also the teaching and learning of computer programming in any setting. It will enable educators to (a) identify the developmental stage of their students, (b) provide stage-appropriate learning resources and (c) assist students in transitioning to the next, more mature stage of reasoning.
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which would require excessive analysis time to analyze fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring full analysis of these constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second extension is based on using manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of the dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
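As a hedged, highly simplified illustration of the base analysis's backward propagation (not the paper's implementation), the Python sketch below pushes the condition "the dereferenced variable is null" backwards through a straight-line sequence of assignments and returns a necessary condition at entry; the mini-language and the helper name backward_condition are invented for the sketch.

# Toy backward propagation of a nullness condition through straight-line
# assignments (variable := variable | "null" | "new").  Invented mini-language;
# this only illustrates computing a necessary entry condition for unsafety.

def backward_condition(assignments, deref_var):
    """Return the set of variables that must be null at entry for the
    dereference of deref_var (after the assignments) to be unsafe."""
    cond = {deref_var}            # variables that must be null
    for lhs, rhs in reversed(assignments):
        if lhs in cond:
            cond.discard(lhs)
            if rhs == "new":      # lhs is definitely non-null afterwards
                return None       # condition unsatisfiable: dereference is safe
            if rhs != "null":     # rhs is another variable: substitute it
                cond.add(rhs)
            # rhs == "null": the condition holds regardless of entry state
    return cond

# p = q; r = new; x = p; then x is dereferenced:
program = [("p", "q"), ("r", "new"), ("x", "p")]
print(backward_condition(program, "x"))   # {'q'} -> unsafe only if q is null at entry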
Abstract:
A new method for calculating asphericity, the area-length method, is proposed. It fully unifies the treatment of conic (second-order) and higher-order aspheric surfaces and directly yields the position of the centre of the best-fit sphere. The underlying idea is that the areas enclosed by two similar curves together with a given fixed point should be equal, and that the lengths of the two curves should be very close. Worked examples show that the method gives accurate results, is easy to program and computes quickly.
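As a hedged illustration of the two quantities such an area-length criterion compares (this is not the paper's algorithm; the sample profiles, the fixed point and the helper names are invented), the Python sketch below computes the area a sampled profile encloses with a fixed point and the profile's arc length:

import numpy as np

# Illustrative helpers for an area-length style comparison: the area a sampled
# curve encloses with a fixed point, and the curve's arc length.

def enclosed_area(x, y, px, py):
    """Shoelace area of the polygon formed by the curve samples and (px, py)."""
    xs = np.append(x, px)
    ys = np.append(y, py)
    return 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))

def arc_length(x, y):
    """Sum of straight segments between consecutive samples."""
    return float(np.sum(np.hypot(np.diff(x), np.diff(y))))

# Compare a toy aspheric profile with a toy spherical profile against one point:
t = np.linspace(-1.0, 1.0, 2001)
asphere = 0.5 * t**2 + 0.05 * t**4           # invented even-asphere sag
sphere = 1.1 - np.sqrt(1.1**2 - t**2)        # invented spherical sag, radius 1.1

print(enclosed_area(t, asphere, 0.0, 1.1), enclosed_area(t, sphere, 0.0, 1.1))
print(arc_length(t, asphere), arc_length(t, sphere))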
Abstract:
This article investigates how to use the UK probabilistic climate-change projections (UKCP09) in rigorous building energy analysis. Two office buildings (deep plan and shallow plan) are used as case studies to demonstrate the application of UKCP09. Three different methods for reducing the computational demands are explored: statistical reduction (Finkelstein-Schafer [F-S] statistics), simplification using degree-day theory, and the use of metamodels. The first method, which is based on an established technique, can be used as a reference because it provides the most accurate information. However, the weather files must be chosen automatically, based on the F-S statistic, using a programming language, because thousands of weather files created by the UKCP09 weather generator need to be processed. A combination of the second method (degree-day theory) and the third (metamodels) requires only a relatively small number of simulation runs, yet still provides valuable information for subsequent uncertainty and sensitivity analyses. The article also demonstrates how grid computing can be used to speed up the calculation for many independent EnergyPlus models by harnessing the processing power of idle desktop computers. © 2011 International Building Performance Simulation Association (IBPSA).
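A hedged sketch of how candidate weather series might be ranked with the Finkelstein-Schafer statistic (not the authors' code; the synthetic data, array shapes and function names are assumptions): compare each candidate's empirical CDF of a daily variable against the long-term CDF and keep the candidate with the smallest mean absolute difference.

import numpy as np

# Illustrative Finkelstein-Schafer (F-S) ranking of candidate weather series
# against a long-term series for one daily variable (e.g. dry-bulb temperature).

def empirical_cdf(sample, grid):
    """Fraction of sample values <= each grid point."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def fs_statistic(candidate, long_term):
    """Mean absolute difference between the two empirical CDFs."""
    grid = np.sort(long_term)
    return float(np.mean(np.abs(empirical_cdf(candidate, grid)
                                - empirical_cdf(long_term, grid))))

rng = np.random.default_rng(0)
long_term = rng.normal(11.0, 4.0, size=30 * 365)         # 30 years of daily values
candidates = [rng.normal(mu, 4.0, size=365) for mu in (9.5, 11.0, 12.5)]

scores = [fs_statistic(c, long_term) for c in candidates]
print(scores, "-> best candidate:", int(np.argmin(scores)))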
Abstract:
From the perspective of information control, this paper defines a robot language as a computer programming language capable of handling certain specific "external devices". The components of robot languages are divided into two major parts, namely the robot core language and the robot-specific language, and the progress of each is then reviewed in turn.
Abstract:
Since the mid-1980s, the mechanisms of transfer of training between cognitive subskills that rest on the same body of declarative knowledge have attracted considerable attention. The dominant account is the theory of common elements (Singley & Anderson, 1989), which predicts that there will be little or no transfer between subskills within the same domain when knowledge is used in different ways, even though the subskills might rest on a common body of declarative knowledge. This idea is termed the "principle of use specificity of knowledge" (Anderson, 1987). Although this principle has gained some empirical support in domains such as elementary geometry (Neves & Anderson, 1981) and computer programming (McKendree & Anderson, 1987), it is challenged by research (Pennington et al., 1991; 1995) in which substantially larger amounts of transfer of training were found between subskills that rest on shared declarative knowledge but share few procedures (production rules). Pennington et al. (1995) provided evidence that these larger amounts of transfer are due to the elaboration of declarative knowledge. Our research tests these two competing explanations by considering transfer between two subskills within the domains of elementary geometry and elementary algebra respectively, and the influence of learning method ("learning from examples" versus "learning from declarative text") and subject ability (high, middle, low) on the amount of transfer. Within elementary geometry, the two subskills of "generating proofs" (GP) and "explaining proofs" (EP), which rest on the declarative knowledge of "theorems on the properties of parallelograms", share few procedures. Within elementary algebra, the two subskills of "calculation" (C) and "simplification" (S), which rest on the declarative knowledge of "multiplication of radicals", share somewhat more procedures. The results demonstrate that: 1. Within elementary geometry, although little transfer was found between GP and EP across all subjects, different results emerged when subject ability was considered. Among high-ability subjects, significant positive transfer was found from EP to GP, while little transfer was found in the opposite direction (i.e. from GP to EP). Among low-ability subjects, significant positive transfer was found from EP to GP, while significant negative transfer was found in the opposite direction. For middle-ability subjects, little transfer was found between the two subskills. 2. Within elementary algebra, significant positive transfer was found from S to C, while significant negative transfer was found in the opposite direction (i.e. from C to S), when considering all subjects. The same pattern of transfer occurred among middle-ability and low-ability subjects; among high-ability subjects, no transfer was found between the two subskills. 3. Within these two domains, different learning methods had little influence on transfer of training between subskills. Apparently, these results cannot be attributed either to common procedures or to elaboration of declarative knowledge alone. A synthetic account is needed, taking into account three elements: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; and (3) the elaboration of procedural knowledge.
Setting aside subject factors, transfer of training between subskills can be predicted and explained by analyzing the relations between the procedures of the two subskills. However, for particular groups of subjects, the explanation of transfer must also include the subjects' elaboration of declarative and procedural knowledge, especially the influence of that elaboration on performing the other subskill. The finding that different learning methods had little influence on transfer can be explained by the fact that the two methods did not affect the level of declarative knowledge. Protocol analysis provided evidence to support these hypotheses. From this research, we conclude that, in order to explain the mechanisms of transfer of training between cognitive subskills resting on the same body of declarative knowledge, three elements must be considered together: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; and (3) the elaboration of procedural knowledge.
Abstract:
How can one represent the meaning of English sentences in a formal logical notation such that the translation of English into this logical form is simple and general? This report answers this question for a particular kind of meaning, namely quantifier scope, and for a particular part of the translation, namely the syntactic influence on the translation. Rules are presented which predict, for example, that the sentence "Everyone in this room speaks at least two languages" has the quantifier scope AE in standard predicate calculus, while the sentence "At least two languages are spoken by everyone in this room" has the quantifier scope EA. Three different logical forms are presented, and their translation rules are examined. One of the logical forms is predicate calculus; the translation rules for it were developed by Robert May (May 1977). The other two logical forms are Skolem form and a simple computer programming language. The translation rules for these two logical forms are new. All three sets of translation rules are shown to be general, in the sense that the same rules express the constraints that syntax imposes on certain other linguistic phenomena. For example, the rules that constrain the translation into Skolem form are shown to constrain definite NP anaphora as well. A large body of carefully collected data is presented and used to assess the empirical accuracy of each of the theories. None of the three theories is vastly superior to the others. However, the report concludes by suggesting that a combination of the two newer theories would have the greatest generality and the highest empirical accuracy.
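For concreteness, the two scope readings of the example sentences can be written out as follows (the predicate names and the counting-quantifier abbreviation \(\exists_{\ge 2}\), which stands for the usual two-variable expansion, are ours and not the report's):

\[
\text{AE: } \forall x\,\big(\mathrm{in\_room}(x)\rightarrow \exists_{\ge 2}\, y\,(\mathrm{language}(y)\wedge \mathrm{speaks}(x,y))\big)
\]
\[
\text{EA: } \exists_{\ge 2}\, y\,\big(\mathrm{language}(y)\wedge \forall x\,(\mathrm{in\_room}(x)\rightarrow \mathrm{speaks}(x,y))\big)
\]

In the AE reading each person may speak a different pair of languages; in the EA reading the same two languages are spoken by everyone.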
Abstract:
A model for understanding the formation and propagation of modes in curved optical waveguides is developed. A numerical method for the calculation of curved waveguide mode profiles and propagation constants in two-dimensional waveguides is developed, implemented and tested. A numerical method for the analysis of propagation of modes in three-dimensional curved optical waveguides is developed, implemented and tested. A technique for the design of curved waveguides with reduced transition loss is presented. A scheme for drawing these new waveguides and ensuring that they have constant width is also provided. Claims about the waveguide design technique are substantiated through numerical simulations.