77 results for Computer science in education
Abstract:
Environmental processes have been modelled for decades. However, the need for integrated assessment and modelling (IAM) has grown as the extent and severity of environmental problems in the 21st Century worsen. The scale of IAM is not restricted to the global level, as in climate change models, but includes local and regional models of environmental problems. This paper discusses various definitions of IAM and identifies five different types of integration that are needed for the effective solution of environmental problems. The future is then depicted in the form of two brief scenarios: one optimistic and one pessimistic. The current state of IAM is then briefly reviewed. The issues of complexity and validation in IAM are recognised as more demanding than in traditional disciplinary approaches. Communication is identified as a central issue, both internally among team members and externally with decision-makers, stakeholders and other scientists. Finally, it is concluded that the process of integrated assessment and modelling is as important as the product for any particular project. By learning to work together and to recognise the contribution of all team members and participants, it is believed that we will have a strong scientific and social basis to address the environmental problems of the 21st Century. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We present finite element simulations of reactive mineral-carrying fluid mixing and mineralization in pore-fluid saturated hydrothermal/sedimentary basins. In particular, we explore the mixing of reactive sulfide and sulfate fluids and the relevant patterns of mineralization for lead, zinc and iron minerals in the regime of temperature-gradient-driven convective flow. Since mineralization and ore body formation may last a long period of time in a hydrothermal basin, it is commonly assumed in geochemistry that the solutions of minerals are in, or near, an equilibrium state. Therefore, the mineralization rate of a particular kind of mineral can be expressed as the product of the pore-fluid velocity and the equilibrium concentration of that mineral. Using this mineralization rate, the potential of the modern mineralization theory is illustrated by means of finite element studies of reactive mineral-carrying fluid mixing problems in materially homogeneous and inhomogeneous porous rock basins.
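The rate relation stated in this abstract lends itself to a one-line sketch; the function name, units and numeric values below are invented for illustration and are not taken from the paper:

```python
# Sketch of the mineralization-rate relation described in the abstract:
# rate = pore-fluid velocity x equilibrium concentration of the mineral.
# Names, units and values are illustrative assumptions.

def mineralization_rate(velocity, c_eq):
    """Mineral deposition rate as the product of pore-fluid velocity
    (m/s) and the equilibrium concentration of the mineral (kg/m^3),
    giving a mass flux in kg/(m^2 s)."""
    return velocity * c_eq

# Example: a slow convective flow carrying a dilute sulfide solution.
rate = mineralization_rate(velocity=1e-9, c_eq=0.05)
```

The point of the sketch is only that, under the near-equilibrium assumption, the rate is linear in both the flow speed and the equilibrium concentration.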
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
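The visualization idea can be sketched without any dependencies: snapshot the network's weight vector at each training step, then project the snapshots onto their first principal component. The toy trajectory, dimensions and power-iteration PCA below are illustrative assumptions, not the paper's code:

```python
import random

random.seed(0)
n_steps, n_weights = 100, 5

# Toy "learning trajectory": weight snapshots drift mostly along one
# direction, mimicking the dominant motion a learning algorithm such as
# back-propagation typically shows.
direction = [1.0, 0.5, -0.3, 0.2, 0.1]
snapshots = [[t * d + random.gauss(0, 0.05) for d in direction]
             for t in range(n_steps)]

# Centre the snapshots.
mean = [sum(col) / n_steps for col in zip(*snapshots)]
centred = [[x - m for x, m in zip(row, mean)] for row in snapshots]

# First principal component by power iteration on the (unnormalized)
# covariance: repeatedly apply C = X^T X and renormalize.
pc = [1.0] * n_weights
for _ in range(50):
    scores = [sum(x * v for x, v in zip(row, pc)) for row in centred]
    pc = [sum(s * row[j] for s, row in zip(scores, centred))
          for j in range(n_weights)]
    norm = sum(v * v for v in pc) ** 0.5
    pc = [v / norm for v in pc]

# Project each snapshot onto the first PC: a 1-D view of the trajectory
# through weight space, suitable for plotting against the training step.
trajectory = [sum(x * v for x, v in zip(row, pc)) for row in centred]
```

With real training runs one would keep the first two or three components and plot the projected path, which is the kind of low-dimensional view of the error-surface traversal the abstract describes.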
Abstract:
We illustrate the flow behaviour of fluids with isotropic and anisotropic microstructure (internal length, layering with bending stiffness) by means of numerical simulations of silo discharge and flow alignment in simple shear. The Cosserat theory is used to provide an internal length in the constitutive model through bending stiffness, describing isotropic microstructure; this theory is coupled to a director theory that adds a specific orientation of grains, describing anisotropic microstructure. The numerical solution is based on an implicit form of the Material Point Method developed by Moresi et al. [1].
Abstract:
Shear deformation of fault gouge or other particulate materials often results in observed strain localization, or more precisely, the localization of measured deformation gradients. In conventional elastic materials strain localization cannot take place; this phenomenon is therefore attributed to special types of non-elastic constitutive behaviour. For particulate materials, however, the Cosserat continuum, which takes account of microrotations independent of displacements, is a more appropriate model. In an elastic Cosserat continuum, localization in displacement gradients is possible under some combinations of the generalized Cosserat elastic moduli. The same combinations of parameters also correspond to considerable dispersion in shear wave propagation, which can be used for independent experimental verification of the proposed mechanism of apparent strain localization in fault gouge.
Abstract:
This paper seeks to demonstrate the usefulness of the theory of Bourdieu, including the concepts of field, logics of practice and habitus, for understanding relationships between media and policy, what Fairclough has called the 'mediatization' of policy. Specifically, the paper draws upon Bourdieu's accessible account of the journalistic field as outlined in On television and journalism. The usefulness of this work is illustrated through a case study of a recent Australian science policy, The chance to change. As this policy went through various iterations and media representations, its naming and structure became more aphoristic. This is the mediatization of contemporary policy, which often results in policy as sound bite. The case study also shows the cross-field effects of this policy in education, illustrating how educational policy today can be spawned from developments in other public policy fields.
Abstract:
We introduce biomimetic in silico devices, and means for validation along with methods for testing and refining them. The devices are constructed from adaptable software components designed to map logically to biological components at multiple levels of resolution. In this report we focus on the liver; the goal is to validate components that mimic features of the lobule (the hepatic primary functional unit) and dynamic aspects of liver behavior, structure, and function. An assembly of lobule-mimetic devices represents an in silico liver. We validate against outflow profiles for sucrose administered as a bolus to isolated, perfused rat livers. Acceptable in silico profiles are experimentally indistinguishable from those of the in situ referent. This new technology is intended to provide powerful new tools for challenging our understanding of how biological functional units function in vivo.
Abstract:
Geospatial clustering must be designed in such a way that it takes into account the special features of geoinformation and the peculiar nature of geographical environments in order to successfully derive geospatially interesting global concentrations and localized excesses. This paper examines families of geospatial clustering methods recently proposed in the data mining community and identifies several features and issues especially important to geospatial clustering in data-rich environments.
Abstract:
A program can be refined either by transforming the whole program or by refining one of its components. The refinement of a component is, for the most part, independent of the remainder of the program. However, refinement of a component can depend on the context of the component for information about the variables that are in scope and what their types are. The refinement can also take advantage of additional information, such as any precondition the component can assume. The aim of this paper is to introduce a technique, which we call program window inference, to handle such contextual information during derivations in the refinement calculus. The idea is borrowed from a technique, called window inference, for handling context in theorem proving. Window inference is the primary proof paradigm of the Ergo proof editor. This tool has been extended to mechanize refinement using program window inference. (C) 1997 Elsevier Science B.V.
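As a loose illustration of the idea (not the Ergo tool or the actual refinement calculus), a window can be modelled as a focused component together with the assumptions in scope, under which a rewrite of that component is justified; all names in the sketch are invented:

```python
# Toy model of window inference: open a "window" on a program component,
# carrying the context (here, a set of known preconditions), and rewrite
# the focused component only when the context justifies the step.
# All names are invented for this sketch.

def open_window(term, context):
    """Return a window: the focused component plus assumptions in scope."""
    return {"focus": term, "context": context}

def refine(window, old, new, justification):
    """Replace `old` by `new` inside the window if the context allows it."""
    if justification(window["context"]):
        window["focus"] = window["focus"].replace(old, new)
    return window

# Refining the component "abs(x)" to "x" is valid in a context where
# the precondition x >= 0 is available.
w = open_window("y := abs(x)", context={"x >= 0"})
w = refine(w, "abs(x)", "x", justification=lambda ctx: "x >= 0" in ctx)
# w["focus"] is now "y := x"
```

The point is that the rewrite of the sub-component is checked against the contextual assumptions rather than against the whole program, which is the economy window inference provides.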
Abstract:
In order to separate the effects of experience from other characteristics of word frequency (e.g., orthographic distinctiveness), computer science and psychology students rated their experience with computer science technical items and nontechnical items from a wide range of word frequencies prior to being tested for recognition memory of the rated items. For nontechnical items, there was a curvilinear relationship between recognition accuracy and word frequency for both groups of students. The usual superiority of low-frequency words was demonstrated and high-frequency words were recognized least well. For technical items, a similar curvilinear relationship was evident for the psychology students, but for the computer science students, recognition accuracy was inversely related to word frequency. The ratings data showed that subjective experience rather than background word frequency was the better predictor of recognition accuracy.
Abstract:
The report was commissioned by the Department of Education, Science and Training to investigate the perceived efficacy of middle years programmes in all States and Territories in improving the quality of teaching, learning and student outcomes, especially in literacy and numeracy and for student members of particular target groups. These target groups included students from lower socio-economic communities, Aboriginal and Torres Strait Islander communities, students with a language background other than English, rural and remote students, and students struggling with the transition from middle/upper primary to the junior secondary years. The project involved large-scale national and international literature reviews on Australian and international middle years approaches, as well as an analysis of key literacy and numeracy teaching and learning strategies being used. In the report, there is emergent evidence of the relative efficacy of a combination of explicit state policy, dedicated funding, and curriculum and professional development frameworks that are focused on the improvement of classroom pedagogy in the middle years. The programs that evidenced the greatest current and potential value for target group students tended to have developed in state policy environments that encouraged a structural rather than adjunct approach to middle years innovations. The authors conclude that in order to translate the gains made into sustainable improvement of educational results in literacy and numeracy for target groups, there is a need for a second generation of middle years theorising, research, development and practice.
Abstract:
Much faith has been put in the increased supply of education as a means to promote national economic development and as a way to assist the poor and the disadvantaged. However, the benefits that nations can obtain by increasing the level of education of their workforce depend on the availability of other forms of capital to complement the use of an educated workforce in production. Generally, less developed nations are lacking in complementary capital compared to more developed ones, and it is appropriate for less developed countries to spend relatively less on education. The contribution of education to economic growth depends on a nation’s stage of economic development. It is only when a nation becomes relatively developed that education becomes a major contributor to economic growth. It is possible for less developed nations to retard their economic growth by favouring investment in educational capital rather than other forms of capital. Easy access to education is often portrayed as a powerful force for assisting the poor and the disadvantaged. Several reasons are given here as to why it may not be so effective in assisting the poor and in promoting greater income equality, even though the aim is a worthy one. Also, an economic argument is presented in favour of special education for the physically and mentally handicapped. This paper is not intended to belittle the contribution of education to economic development, nor to devalue the ideal of making basic education available to all. Instead, it is intended as an antidote to inflated claims about the ability of greater investment in education to promote economic growth and about the ability of more widespread access to education to reduce poverty and decrease income inequality.
Abstract:
An important feature of some conceptual modelling grammars is the constructs they provide to allow database designers to show that real-world things may or may not possess a particular attribute or relationship. In the entity-relationship model, for example, the fact that a thing may not possess an attribute can be represented by using a special symbol to indicate that the attribute is optional. Similarly, the fact that a thing may or may not be involved in a relationship can be represented by showing the minimum cardinality of the relationship as zero. Whether these practices should be followed, however, is a contentious issue. An alternative approach is to eliminate optional attributes and relationships from conceptual schema diagrams by using subtypes that have only mandatory attributes and relationships. In this paper, we first present a theory that led us to predict that optional attributes and relationships should be used in conceptual schema diagrams only when users of the diagrams require a surface-level understanding of the domain being represented by the diagrams. When users require a deep-level understanding, however, optional attributes and relationships should not be used because they undermine users' abilities to grasp important domain semantics. We describe three experiments which we then undertook to test our predictions. The results of the experiments support our predictions.
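The two schema styles compared in this abstract can be sketched with type declarations; the employee/parking-space example and all names below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Style 1: optional attribute -- a single entity type in which the
# attribute may simply be absent (minimum cardinality zero).
@dataclass
class Employee:
    name: str
    parking_space: Optional[str] = None

# Style 2: mandatory-only subtypes -- the optionality is moved into the
# type hierarchy, so every attribute declared on a type is always present.
@dataclass
class EmployeeBase:
    name: str

@dataclass
class ParkingEmployee(EmployeeBase):
    parking_space: str   # mandatory in this subtype

alice = Employee(name="Alice")                        # attribute absent
bob = ParkingEmployee(name="Bob", parking_space="B7")
```

In the subtype style, whether a thing has the attribute is visible from its type alone, which is one way of reading the abstract's claim that subtyping surfaces domain semantics that an optional attribute leaves implicit.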
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax, a modelling assumption can be described as an elementary assumption, i.e. an assumption consisting of only an assumption atom, or a composite assumption consisting of a conjunction of elementary assumptions. This syntax of modelling assumptions enables us to represent modelling assumptions as transformations acting on the set of model equations. The notion of syntactical correctness and semantic consistency of sets of modelling assumptions is defined, and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, a series of model families, each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy and momentum balance volumes, and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within equivalence classes there are many model members, these being related by algebraic model transformations for the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications on system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. All rights reserved.
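The triplet syntax and the assumption-as-transformation view described above can be sketched as follows; the toy model, atoms and transformation rules are invented for illustration and are not the paper's formalism:

```python
# A toy process model: a set of named balance equations (plain strings).
model = {
    "mass":   "dM/dt = F_in - F_out",
    "energy": "dE/dt = F_in*h_in - F_out*h_out + Q",
}

# Assumption atom as a triplet: (model element, relation, value).
isothermal = ("T", "is", "constant")
steady_state = ("d/dt", "is", "zero")

# A composite assumption is a conjunction of elementary assumptions,
# represented here simply as a list of atoms.
composite = [isothermal, steady_state]

def apply_assumption(model, atom):
    """Apply one assumption atom as a transformation on the equation set."""
    transformed = dict(model)
    if atom == ("T", "is", "constant"):
        # An isothermal system needs no energy balance.
        transformed.pop("energy", None)
    elif atom == ("d/dt", "is", "zero"):
        # Steady state: accumulation terms vanish.
        transformed = {name: eq.replace("dM/dt", "0").replace("dE/dt", "0")
                       for name, eq in transformed.items()}
    return transformed

reduced = model
for atom in composite:
    reduced = apply_assumption(reduced, atom)
# reduced is now {"mass": "0 = F_in - F_out"}
```

Each atom maps one model in the hierarchy to a simpler one, so a composite assumption traces a path from the full model family down to a reduced equivalence class, which mirrors the hierarchy structure the abstract describes.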