35 results for computer system emulation, multiprocessors, educational computer systems
in CentAUR: Central Archive University of Reading - UK
Abstract:
The notion that learning is enhanced when a teaching approach matches a learner's learning style has been widely accepted in classroom settings, since learning style is a predictor of students' attitudes and preferences. As such, the traditional 'one-size-fits-all' approach to teaching delivery in Educational Hypermedia Systems (EHSs) has to be replaced with an approach that responds to users' needs by exploiting their individual differences. However, establishing and implementing reliable approaches for matching teaching delivery and modalities to learning styles remains an open challenge. In this paper, seventy-six studies are objectively analysed with several goals. In order to reveal the value of integrating learning styles in EHSs, different perspectives in this context are discussed, and the most effective learning style models incorporated within adaptive EHSs (AEHSs) are identified. Investigating the effectiveness of different approaches for modelling students' individual learning traits is another goal of this study. The paper thus highlights a number of theoretical and technical issues of learning-style-based adaptive EHSs (LS-BAEHSs) to serve as comprehensive guidance for researchers interested in this area.
Abstract:
This paper is concerned with the uniformization of a system of affine recurrence equations. This transformation is used in the design (or compilation) of highly parallel embedded systems (VLSI systolic arrays, signal processing filters, etc.). In this paper, we present and implement an automatic system to achieve uniformization of systems of affine recurrence equations. We unify the results from many earlier papers, develop some theoretical extensions, and then propose effective uniformization algorithms. Our results can be used in any high-level synthesis tool based on a polyhedral representation of nested loop computations.
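As a hedged illustration of the transformation (a standard textbook example, not one taken from this paper), a broadcast of an input x(j) to every row i is an affine dependency, because the dependence map (i, j) -> (0, j) is not a constant translation; introducing a pipelining variable X makes every dependency uniform, i.e. a constant offset:

    % Affine recurrence: y(i,j) reads x(j), the non-uniform dependence (i,j) -> (0,j)
    y(i,j) = f\bigl(y(i-1,j),\, x(j)\bigr), \qquad 1 \le i \le N,\; 1 \le j \le M
    % Uniformized system: X pipelines x(j) along i with the constant offset (1,0)
    X(0,j) = x(j), \qquad X(i,j) = X(i-1,j), \qquad y(i,j) = f\bigl(y(i-1,j),\, X(i,j)\bigr)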
Abstract:
This article describes an application of computers to a consumer-based production engineering environment. Particular consideration is given to the utilisation of low-cost computer systems for the visual inspection of components on a production line in real time. The process of installation is discussed, from identifying the need for artificial vision and justifying the cost, through to choosing a particular system and designing the physical and program structure.
Abstract:
Relating system dynamics to the broad systems movement, the key notion is that reinforcing loops deserve no less attention than balancing loops. Three specific propositions follow. First, since reinforcing loops arise in surprising places, investigations of complex systems must consider their possible existence and potential impact. Second, because the strength of reinforcing loops can be misinferred - we include an example from the field of servomechanisms - computer simulation can be essential. Be it project management, corporate growth or inventory oscillation, simulation helps to assess consequences of reinforcing loops and options for interventions. Third, in social systems the consequences of reinforcing loops are not inevitable. Examples concerning globalization illustrate how difficult it might be to challenge such assumptions. However, system dynamics and ideas from contemporary social theory help to show that even the most complex social systems are, in principle, subject to human influence. In conclusion, by employing these ideas, by attending to reinforcing as well as balancing loops, system dynamics work can improve the understanding of social systems and illuminate our choices when attempting to steer them.
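As a generic illustration of why simulation matters here (the model and parameter values below are illustrative assumptions, not taken from the paper), the following fragment integrates a stock subject to one reinforcing and one balancing loop; which loop dominates is hard to infer by inspecting the diagram alone but is immediate from the simulated trajectory:

    # Minimal Euler integration of a stock driven by a reinforcing loop
    # (growth proportional to the stock) and a balancing loop (adjustment
    # toward a goal). All values are illustrative assumptions.
    stock, goal = 10.0, 100.0
    growth_rate, adjustment_time, dt = 0.05, 4.0, 0.25

    for _ in range(200):                              # 50 time units
        reinforcing = growth_rate * stock             # positive feedback
        balancing = (goal - stock) / adjustment_time  # negative feedback
        stock += dt * (reinforcing + balancing)

    print(f"stock after 50 time units: {stock:.1f}")  # settles near 125, not at the goal of 100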
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
Abstract:
G-Rex is lightweight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they were running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful Web application that runs inside a servlet container on the remote system, and the client component is a Java command-line program that can easily be incorporated into existing scientific workflow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission.

Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain.

G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al. 2008) ocean models are installed, and there are plans to install the Hadley Centre’s HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al. 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al. 2008), which aims to simulate the world’s coastal oceans.

A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid’s Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, which is unmodified except that calls to run the model (e.g. with “mpirun”) are simply replaced with calls to “GRexRun”; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine.

G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
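The substitution described in step (3) can be sketched as follows. This is a hypothetical illustration only: the GRexRun and mpirun argument lists and the executable name are assumptions, not documented G-Rex usage.

    import subprocess

    # Hypothetical sketch of step (3): the workflow script is left unchanged
    # except that the parallel launcher "mpirun" is swapped for the G-Rex
    # client "GRexRun". The flags and executable name below are assumptions.
    USE_GREX = True   # False would launch the model directly on a local cluster

    launcher = ["GRexRun"] if USE_GREX else ["mpirun", "-np", "40"]
    subprocess.run(launcher + ["./nemo.exe"], check=True)

    # Steps (4)-(5): G-Rex then stages input files to the remote resource and
    # streams output files back (deleting them remotely) during the run, so the
    # scientist can monitor them with the usual local analysis tools.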
Abstract:
Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at the rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing a system: improved representation of surface‐subsurface interactions due to fine‐scale topography and vegetation; improved representation of land‐atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10⁹ unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
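The order of magnitude quoted above can be checked with a rough back-of-envelope estimate; the land area, grid spacing and number of unknowns per grid column below are illustrative assumptions, not figures from the paper.

    # Back-of-envelope count of unknowns in a 1 km ("hyperresolution") global land model.
    land_area_km2 = 1.5e8     # roughly 1.5 x 10^8 km^2 of global land surface
    cell_area_km2 = 1.0       # 1 km x 1 km grid cells
    states_per_column = 10    # e.g. soil layers / prognostic variables (assumption)

    columns = land_area_km2 / cell_area_km2   # ~1.5 x 10^8 grid columns
    unknowns = columns * states_per_column    # ~1.5 x 10^9, i.e. of order 10^9
    print(f"~{unknowns:.1e} unknowns")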
Abstract:
Aim: To determine the prevalence and nature of prescribing errors in general practice; to explore the causes, and to identify defences against error. Methods: 1) Systematic reviews; 2) Retrospective review of unique medication items prescribed over a 12-month period to a 2% sample of patients from 15 general practices in England; 3) Interviews with 34 prescribers regarding 70 potential errors; 15 root cause analyses, and six focus groups involving 46 primary health care team members. Results: The study involved examination of 6,048 unique prescription items for 1,777 patients. Prescribing or monitoring errors were detected for one in eight patients, involving around one in 20 of all prescription items. The vast majority of the errors were of mild to moderate severity, with one in 550 items being associated with a severe error. The following factors were associated with increased risk of prescribing or monitoring errors: male gender, age less than 15 years or greater than 64 years, number of unique medication items prescribed, and being prescribed preparations in the following therapeutic areas: cardiovascular, infections, malignant disease and immunosuppression, musculoskeletal, eye, ENT and skin. Prescribing or monitoring errors were not associated with the grade of GP or whether prescriptions were issued as acute or repeat items. A wide range of underlying causes of error were identified relating to the prescriber, patient, the team, the working environment, the task, the computer system and the primary/secondary care interface. Many defences against error were also identified, including strategies employed by individual prescribers and primary care teams, and making best use of health information technology. Conclusion: Prescribing errors in general practices are common, although severe errors are unusual. Many factors increase the risk of error. Strategies for reducing the prevalence of error should focus on GP training, continuing professional development for GPs, clinical governance, effective use of clinical computer systems, and improving safety systems within general practices and at the interface with secondary care.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories — this is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
Abstract:
This paper considers left-invariant control systems defined on the orthonormal frame bundles of simply connected manifolds of constant sectional curvature, namely the space forms: Euclidean space E³, the sphere S³ and the hyperboloid H³, with the corresponding frame bundles equal to the Euclidean group of motions SE(3), the rotation group SO(4) and the Lorentz group SO(1, 3). Orthonormal frame bundles of space forms coincide with their isometry groups and therefore the focus shifts to left-invariant control systems defined on Lie groups. In this paper a method for integrating these systems is given where the controls are time-independent. In the Euclidean case the elements of the Lie algebra se(3) are often referred to as twists. For constant twist motions, the corresponding curves g(t) ∈ SE(3) are known as screw motions, given in closed form by using the well-known Rodrigues' formula. However, this formula is only applicable to the Euclidean case. This paper gives a method for computing the non-Euclidean screw motions in closed form. This involves decoupling the system into two lower dimensional systems using the double cover properties of Lie groups; the lower dimensional systems are then solved explicitly in closed form.
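For background (standard formulas, not results of this paper), the Euclidean closed form referred to above is Rodrigues' rotation formula; for a constant unit twist ξ = (ω, v) the corresponding screw motion in SE(3) also has a well-known closed form:

    e^{\theta\hat{\omega}} = I + \sin\theta\,\hat{\omega} + (1-\cos\theta)\,\hat{\omega}^{2},
    \qquad \|\omega\| = 1,

    e^{\theta\hat{\xi}} =
    \begin{pmatrix} e^{\theta\hat{\omega}} & (I - e^{\theta\hat{\omega}})(\omega\times v) + \omega\omega^{T}v\,\theta \\ 0 & 1 \end{pmatrix},
    \qquad
    \hat{\xi} = \begin{pmatrix} \hat{\omega} & v \\ 0 & 0 \end{pmatrix}.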
Abstract:
The purpose of this study is to analyse the current data continuity mechanisms employed by the target group of businesses and to identify any inadequacies in those mechanisms as a whole. The questionnaire responses indicate that 47% of respondents perceive backup methodologies as important, with a total of 70% of respondents having some backup methodology already in place. Businesses in Moulton Park perceive the loss of data to have a significant effect upon their business’ ability to function. Only 14% of respondents indicated that loss of data on computer systems would not affect their business at all, with 54% of respondents indicating that there would be a “major effect” or greater on their ability to operate. Respondents who had experienced data loss were more likely to have backup methodologies in place (53%) than respondents who had not (18%). Although the small number of respondents clearly affected the quality and conclusiveness of the results returned, the level of backup methodologies in place appears to be proportional to company size. Further investigation into the subject is recommended in order to validate the information gleaned from the small number of respondents.
Abstract:
Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with proven direct-method algorithms can take a very long time, as their cost depends on the size of the matrix. The computational complexity of stochastic Monte Carlo methods depends only on the number of chains and the length of those chains. The computing power needed by inherently parallel Monte Carlo methods can be satisfied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors.
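For context, a minimal serial sketch of the classical Ulam-von Neumann random-walk estimator for a matrix inverse is given below; it is not the authors' load-balanced, Grid-distributed algorithm, and the splitting A = I - C with a convergent Neumann series is an assumption.

    import numpy as np

    def mc_inverse(A, num_walks=5000, walk_len=50, seed=0):
        """Monte Carlo (Ulam-von Neumann) estimate of A^{-1}.

        Assumes A = I - C with a convergent Neumann series, so that
        A^{-1} = sum_{k >= 0} C^k; each entry is estimated from random walks.
        """
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        C = np.eye(n) - A
        absC = np.abs(C)
        row_sums = absC.sum(axis=1)
        # Transition probabilities proportional to |C|, row by row.
        P = np.divide(absC, row_sums[:, None],
                      out=np.zeros_like(absC), where=row_sums[:, None] > 0)
        est = np.zeros((n, n))
        for i in range(n):                  # one batch of chains per row of A^{-1}
            for _ in range(num_walks):
                state, weight = i, 1.0
                est[i, state] += weight     # k = 0 term of the Neumann series
                for _ in range(walk_len):
                    if row_sums[state] == 0.0:
                        break               # no outgoing transitions: chain ends
                    nxt = rng.choice(n, p=P[state])
                    weight *= C[state, nxt] / P[state, nxt]
                    state = nxt
                    est[i, state] += weight
        return est / num_walks

    # The chains are independent, which is what makes the method easy to
    # distribute across Grid worker nodes.
    A = np.array([[1.0, 0.25], [0.25, 0.75]])   # spectral radius of I - A < 1
    print(mc_inverse(A))
    print(np.linalg.inv(A))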
Abstract:
The paper reviews the leading diagramming methods employed in system dynamics to communicate the contents of models. The main ideas and historical development of the field are first outlined. Two diagramming methods—causal loop diagrams (CLDs) and stock/flow diagrams (SFDs)—are then described and their advantages and limitations discussed. A set of broad research directions is then outlined. These concern: the abilities of different diagrams to communicate different ideas, the role that diagrams have in group model building, and the question of whether diagrams can be an adequate substitute for simulation modelling. The paper closes by suggesting that although diagrams alone are insufficient, they have many benefits. However, since these benefits have emerged only as ‘craft wisdom’, a more rigorous programme of research into the diagrams' respective attributes is called for.