963 results for Complex Engineering Systems


Relevance: 100.00%

Abstract:

Agent-oriented software engineering and software product lines are two promising software engineering techniques. Recent research has been exploring their integration, namely multi-agent systems product lines (MAS-PLs), to promote reuse and variability management in the context of complex software systems. However, current product derivation approaches do not provide specific mechanisms to deal with MAS-PLs. Such support is essential because MAS-PLs typically encompass several concerns (e.g., trust, coordination, transaction, state persistence) that are built on heterogeneous technologies (e.g., object-oriented frameworks and platforms). In this paper, we propose the use of multi-level models to support configuration knowledge specification and automatic product derivation of MAS-PLs. Our approach provides an agent-specific architecture model that uses abstractions and instantiation rules relevant to this application domain. To evaluate the feasibility and effectiveness of the proposed approach, we have implemented it as an extension of an existing product derivation tool, called GenArch. The approach has also been evaluated through the automatic instantiation of two MAS-PLs, demonstrating its potential and benefits for product derivation and configuration knowledge specification.
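As a rough sketch of the idea (all names below are hypothetical illustrations, not GenArch's actual API), configuration knowledge can be thought of as a mapping from selected features to the agent-architecture elements that must be instantiated during product derivation:

```python
# Minimal sketch of feature-to-architecture mapping for product derivation.
# All names are hypothetical illustrations, not GenArch's actual API.

# Configuration knowledge: which agent-architecture elements each feature requires.
CONFIGURATION_KNOWLEDGE = {
    "trust":        ["TrustPlan", "ReputationBelief"],
    "coordination": ["ContractNetPlan"],
    "persistence":  ["StateStoreAdapter"],
}

def derive_product(selected_features):
    """Instantiate the architecture elements implied by the selected features."""
    elements = []
    for feature in selected_features:
        elements.extend(CONFIGURATION_KNOWLEDGE.get(feature, []))
    return elements

if __name__ == "__main__":
    # Deriving a product that includes the trust and persistence concerns.
    print(derive_product(["trust", "persistence"]))
    # -> ['TrustPlan', 'ReputationBelief', 'StateStoreAdapter']
```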

Relevance: 100.00%

Abstract:

The use of standard reference electrodes, such as Ag/AgCl or saturated calomel electrodes, in potentiometric and amperometric studies involving miniaturized electrochemical systems, or those operating under positive hydraulic pressure, is often impractical. Placement of the reference electrode in the direct vicinity of the working electrode is often prohibited by the dimensions or layout of the electrochemical cell, while the alternative strategy of locating the reference electrode in a separate compartment often leads to electrolyte leakage and contamination of the system. In the present study, we investigated the functionality of a pseudoreference electrode comprising a platinum wire, one end of which was maintained in intimate contact with the internal solution of an Ag/AgCl reference electrode while the other was connected, via a BNC connector, to a platinum probe located within the electrochemical cell. Linear and cyclic voltammetric studies, involving both aqueous and nonaqueous electrolytes, were carried out using the pseudoreference electrode in a three-electrode electrochemical cup-type cell or an electrochemical flow reactor. In all cases, the functionality of the Pt//Ag/AgCl system was similar to that of a conventional Ag/AgCl reference electrode. Variations in the electrolyte did not alter the potential or the voltammetric profile recorded with the pseudoreference system, although peak currents were generally improved and potential values were shifted by approximately +350 mV relative to the Ag/AgCl electrode. Because this potential difference is constant, the pseudoreference system can be applied in any electrochemical system. It is concluded that the pseudoreference electrode can be used with advantage for potentiometric and amperometric measurements in both simple and complex electrochemical systems.
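Since the offset against Ag/AgCl is reported as roughly constant (about +350 mV), converting readings between the two scales is a single subtraction; a minimal sketch:

```python
# Convert potentials measured vs. the Pt//Ag/AgCl pseudoreference to the
# Ag/AgCl scale, using the roughly constant +350 mV offset reported above.
OFFSET_V = 0.350  # approximate shift of the pseudoreference vs. Ag/AgCl

def to_ag_agcl_scale(e_pseudo_v):
    """Map a potential (V) measured vs. the pseudoreference onto the Ag/AgCl scale."""
    return e_pseudo_v - OFFSET_V

# A peak observed at +0.85 V vs. the pseudoreference corresponds to ~+0.50 V vs. Ag/AgCl.
print(to_ag_agcl_scale(0.85))  # -> 0.5
```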

Relevance: 100.00%

Abstract:

Systems Biology is an innovative way of doing biology that has recently arisen in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interactions and the global properties that emerge at the system level through those interactions. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in informatics contexts as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. To support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool, developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
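Purely as an illustration of the MAS style of modelling described here (toy rules and hypothetical names; the thesis' model and virtual laboratory are far richer), a niche of interacting cell agents might look like:

```python
# Minimal agent-based sketch of cells interacting in a niche (illustrative only).
import random

class Cell:
    """An autonomous agent: divides or differentiates based on local crowding."""
    def __init__(self, kind="stem"):
        self.kind = kind

    def step(self, niche):
        if self.kind == "stem":
            # Crowded niches push stem cells toward differentiation.
            if len(niche.cells) > niche.capacity:
                self.kind = "differentiated"
            elif random.random() < 0.3:
                niche.cells.append(Cell("stem"))  # self-renewal

class Niche:
    """The environment mediating the agents' interactions."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.cells = [Cell() for _ in range(3)]

    def step(self):
        for cell in list(self.cells):
            cell.step(self)

niche = Niche()
for _ in range(20):  # one virtual experiment of 20 steps
    niche.step()
print(sum(c.kind == "stem" for c in niche.cells), "stem cells of", len(niche.cells))
```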

Relevance: 100.00%

Abstract:

Many reverse engineering approaches have been developed to analyze software systems written in languages like C/C++ or Java. These approaches typically rely on a meta-model that is either specific to the language at hand or language-independent (e.g., UML). However, one language that has hardly been addressed is Lisp. While at first sight it can be accommodated by current language-independent meta-models, Lisp has some unique features (e.g., macros, CLOS entities) that are crucial for reverse engineering Lisp systems. In this paper we propose a suite of new visualizations that reveal the special traits of the Lisp language and thus help in understanding complex Lisp systems. To validate our approach we apply these visualizations to several large Lisp case studies, and summarize our experience as a series of recurring visual patterns that we have detected.
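To illustrate the kind of Lisp-specific abstraction involved (hypothetical classes, not the paper's actual meta-model), a language-independent meta-model could be extended like this:

```python
# Sketch of extending a language-independent meta-model with Lisp-specific
# entities (illustrative; not the paper's actual meta-model).
from dataclasses import dataclass, field

@dataclass
class Entity:                   # generic meta-model node (as in UML-like models)
    name: str

@dataclass
class Macro(Entity):            # Lisp-specific: macros have no OO analogue
    expansions: int = 0         # how often the macro is expanded in the system

@dataclass
class GenericFunction(Entity):  # CLOS: behaviour lives in generic functions...
    methods: list = field(default_factory=list)  # ...specialised by methods

# A visualization could then size nodes by, e.g., macro expansion counts:
m = Macro("with-transaction", expansions=42)
print(m.name, m.expansions)
```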

Relevance: 100.00%

Abstract:

Enterprise applications are complex software systems that manipulate large amounts of persistent data and interact with the user through a vast and complex user interface. In particular, applications written for the Java 2 Platform, Enterprise Edition (J2EE) are composed using various technologies such as Enterprise Java Beans (EJB) or Java Server Pages (JSP), which in turn rely on languages other than Java, such as XML or SQL. In this heterogeneous context, applying existing reverse engineering and quality assurance techniques developed for object-oriented systems is not enough. Because those techniques were created to measure quality or provide information about a single aspect of J2EE applications, they cannot properly measure the quality of the entire system. We intend to devise techniques and metrics that measure quality in J2EE applications considering all their aspects, and to aid their evolution. Using software visualization, we also intend to inspect the structure of J2EE applications and any other aspects that can be investigated through this technique. To do so, we also need to create a unified meta-model including all the elements composing a J2EE application.
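A minimal sketch of what a unified meta-model spanning EJB, JSP and SQL elements could look like (hypothetical classes; the abstract presents the actual meta-model as planned work):

```python
# Sketch of a unified meta-model spanning the heterogeneous parts of a J2EE
# application (hypothetical classes, for illustration only).
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    name: str

@dataclass
class SqlQuery(ModelElement):        # SQL embedded in the application
    tables: list = field(default_factory=list)

@dataclass
class EnterpriseBean(ModelElement):  # EJB side (Java)
    queries: list = field(default_factory=list)

@dataclass
class ServerPage(ModelElement):      # JSP side (mixes HTML, Java, tag libraries)
    beans_used: list = field(default_factory=list)

# A whole-system metric can now cross technology boundaries, e.g. which
# database tables a given bean ultimately touches:
bean = EnterpriseBean("OrderBean", queries=[SqlQuery("findAll", tables=["ORDERS"])])
print(bean.queries[0].tables)  # -> ['ORDERS']
```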

Relevance: 100.00%

Abstract:

This paper considers ocean fisheries as complex adaptive systems and addresses the question of how human institutions might best be matched to their structure and function. Ocean ecosystems operate at multiple scales, but the management of fisheries tends to be aimed at a single species considered at a single broad scale. The paper argues that this mismatch of ecological and management scale makes it difficult to address the fine-scale aspects of ocean ecosystems, and leads to fishing rights and strategies that tend to erode the underlying structure of populations and the system itself. A successful transition to ecosystem-based management will require institutions better able to economize on the acquisition of feedback about the impact of human activities. This is likely to be achieved by multiscale institutions whose organization mirrors the spatial organization of the ecosystem and whose communications occur through a polycentric network. Better feedback will allow the exploration of fine-scale science and the employment of fine-scale fishing restraints, better adapted to the behavior of fish and habitat. The scale and scope of individual fishing rights also need to be congruent with the spatial structure of the ecosystem. Place-based rights can be expected to create a longer private planning horizon as well as stronger incentives for the private and public acquisition of system-relevant knowledge.

Relevance: 100.00%

Abstract:

Engineering career models have historically been diverse in Europe, and Spanish universities are now adopting the Bologna process. Separate from the older universities, which remain in part technically active, Civil Engineering (Caminos, Canales y Puertos) started at the end of the 18th century in Spain, adopting the French model of Upper Schools for state civil servants with an entrance examination. After the intense wars around 1800, the Ingenieros de Montes appeared as an Upper School to conserve forest regions, and in 1855 the Ingenieros Agrónomos followed, to advance agricultural techniques and practices. Other engineering Upper Schools appeared as well, oriented more towards private industry. These Upper Schools all acquired associated Lower Schools of Ingeniero Técnico. Recently both have grown considerably in numbers and evolved, linked also to the recognized professions. Spanish society, within the European Community, evolved through the year 2000, in part very well, but with severe discordances that caused high youth unemployment in the 2008-2011 crisis. The Bologna process brought major formal changes from 2010-11, accepted with intense adaptation. The Lower Schools are converging towards the Upper Schools, and since 2010-11 both offer various 4-year degrees (Grados), some tied to the pre-existing professions, as well as diverse Masters. Their acceptance by incoming students has started relatively well and will evolve; acceptance of the new degrees for employment in Spain, Europe or elsewhere will be essential. Each Grado now has a fairly rigid curriculum and programs; MOODLE was introduced to connect students, and specific uses of personal computers are taught in each subject. The Escuela de Agrónomos centre, reorganized under its old name in its original buildings at the entrance of the Moncloa Campus, offers Grados in Agronomic Engineering and Science for various public and private agricultural activities, Alimentary Engineering for food activities and control, Agro-Environmental Engineering oriented to environmental activities, and in part Biotechnology, with laboratories on the Montegancedo Campus for Plant Biotechnology and Computational Biotechnology. The curricula include basics, engineering, practicals, visits, English, a final-year project ("proyecto fin de carrera") and placements. Some Masters will lead to specific professional diplomas; the list now includes Agro-Engineering, Agro-Forestal Biotechnology, Agro and Natural Resources Economy, Complex Physical Systems, Gardening and Landscaping, Rural Engineering, Phytogenetic Resources, Plant Genetic Resources, Environmental Technology for Sustainable Agriculture, and Technology for Human Development and Cooperation.

Relevance: 100.00%

Abstract:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be partially reconfigured without disrupting the application. Finally, they feature high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments that require high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility for maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed; it manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space while decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault-injection capabilities. Simulation results for some scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault-injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
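As a toy illustration of C-Layer fault management (the thesis' framework is SystemC-based; everything below is a hypothetical simplification), here is a scrubber that repairs radiation-induced upsets against a golden copy of the configuration memory:

```python
# Toy illustration of C-Layer fault management by scrubbing (readback + repair).
# The thesis' actual framework is SystemC-based; all names here are hypothetical.
import random

GOLDEN = [0b1010] * 8  # golden copy of 8 configuration frames
frames = list(GOLDEN)  # the FPGA's live configuration memory

def inject_fault():
    """Flip one random bit in a random frame (a radiation-induced upset)."""
    i = random.randrange(len(frames))
    frames[i] ^= 1 << random.randrange(4)

def scrub():
    """Read back every frame and rewrite any that differs from the golden copy."""
    repaired = 0
    for i, golden in enumerate(GOLDEN):
        if frames[i] != golden:
            frames[i] = golden
            repaired += 1
    return repaired

inject_fault()
print("repaired", scrub(), "frame(s)")  # -> repaired 1 frame(s)
```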

Relevance: 100.00%

Abstract:

The modeling of complex dynamic systems depends on the solution of a system of differential equations. Problems arise because the mathematical expressions of these equations are not known, although sufficient numerical data on the system variables are available. The authors think that it is very important to establish a code between the different languages to allow information to be coded and decoded. Coding permits us to reduce the study of some objects to that of others. The mathematical expressions used to model certain variables of the system are complex, so it is convenient to define an alphabet code determining the correspondence between these equations and words over the alphabet. In this paper the authors introduce the coding and decoding of models of complex structural systems.
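A minimal sketch of the alphabet-code idea (the expressions and letters are invented for illustration; the paper develops the correspondence formally):

```python
# Minimal sketch of an alphabet code between model expressions and words.
# The expressions and letters below are invented examples.
ENCODE = {"dx/dt = a*x": "A", "dx/dt = a*x + b*y": "B"}  # expression -> letter
DECODE = {v: k for k, v in ENCODE.items()}               # letter -> expression

word = "".join(ENCODE[e] for e in ["dx/dt = a*x", "dx/dt = a*x + b*y"])
print(word)                       # -> "AB": a compact word describing the model
print([DECODE[c] for c in word])  # decoding recovers the original equations
```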

Relevance: 100.00%

Abstract:

Prentice-Hall International Series in Space Technology.

Relevance: 100.00%

Abstract:

This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. The thesis addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. This topic is driven by a series of problems in neuroscience, which represent the principal motivation behind the material in this work. The underlying system is the human brain and the data are generated by modern electromagnetic neuroimaging methods. In this thesis, the underlying functional brain mechanisms are described through the recent mathematical formalism of dynamical systems on complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis with which to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way the neural elements communicate with each other using the brain's anatomical structure, through phenomena of synchronisation and information transfer; and 3) functional connectivity, an epistemic concept which alludes to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel algorithms for functional connectivity, designed to extract different specific aspects of interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow indirect assessment of synchronisation in a local network from a single time series. This approach is useful for inferring the coupling in a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and the inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine-learning extension of the well-known concept of Granger causality. The discussion is developed alongside examples of synthetic and experimental real data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies; they are helpful for testing the techniques developed in this thesis. The real datasets are provided to illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson's disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterize the underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs. The methodology is also applied to an MEG dataset containing healthy, Parkinson's and dementia subjects, with the aim of distinguishing pathological from physiological patterns of connectivity.
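As one concrete example of a phase-synchronisation index of the kind surveyed here (this is the standard phase-locking value, not the thesis' specific estimators; it assumes NumPy/SciPy are available):

```python
# Phase-locking value (PLV): a standard index of phase synchronisation.
import numpy as np
from scipy.signal import hilbert  # analytic signal -> instantaneous phase

def plv(x, y):
    """PLV between two signals: 1 = perfect phase locking, ~0 = no locking."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 5 * t)        # 5 Hz oscillation
y = np.sin(2 * np.pi * 5 * t + 0.8)  # same frequency, constant phase lag
print(plv(x, y))                     # close to 1: strong phase locking
```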

Relevance: 100.00%

Abstract:

The Systems Engineering Group (SEG) at De Montfort University is developing the Boardman Soft Systems Methodology (BSSM), which allows complex human systems to be modelled; this work builds upon Checkland's Soft Systems Methodology (1981). The BSSM has been applied to the modelling of the systems engineering process as used in design and manufacturing companies. The BSSM is used to solicit information from a company, and this data is then transformed into systemic diagrams (systemigrams). These systemigrams are posited to be accurate and concise representations of the system that has been modelled. This paper describes the collaboration between SEG and a manufacturing company (MC) in Leicester, England. The purpose of this collaboration was twofold. First, it was to create an objective view of the MC's processes, in the form of systemigrams. It was important to have this modelled by a source outside the company, as it is difficult for people within a system being modelled to be unbiased. Secondly, it allowed a series of systemigrams to be produced which can then be subjected to simulation, for the purpose of aiding risk management decisions and reducing the project cycle time.

Relevance: 100.00%

Abstract:

The work presented in this thesis describes an investigation into the production and properties of thin amorphous C films, with and without Cr doping, as a low-wear / low-friction coating applicable to MEMS and other micro- and nano-engineering applications. Firstly, an assessment was made of the available testing techniques. Secondly, the optimised test methods were applied to a series of sputtered films of thickness 10-2000 nm in order to: (i) investigate the effect of thickness on the properties of the coatings/coating process, (ii) investigate fundamental tribology at the nano-scale and (iii) provide a starting point for nanotribological coating optimisation at ultra-low thickness. The use of XPS was investigated for the determination of sp3/sp2 carbon bonding. Under C 1s peak analysis, significant errors were identified, attributed to the absence of sufficient instrument resolution to guide the component peak structure (even with a high-resolution instrument). A simple peak-width analysis and correlation work with the C KLL D value confirmed the errors. The use of XPS for sp3/sp2 determination was therefore limited to initial tentative estimations. Nanoindentation was shown to provide consistent hardness and reduced-modulus results with depth (to < 7 nm) when replicate data were suitably statistically processed. No significant pile-up or cracking of the films was identified under nanoindentation. Nanowear experimentation by multiple nanoscratching provided some useful information; however, the test conditions were very different from those expected for MEMS and micro- / nano-engineering systems. A novel 'sample oscillated nanoindentation' system was developed for testing nanowear under more relevant conditions. The films were produced on an industrial production coating line. In order to maximise the available information and to take account of uncontrolled process variation, a statistical design-of-experiments procedure was used to investigate the effect of four key process control parameters. Cr doping was the most significant control parameter at all thicknesses tested; it produced a softening effect and thus increased nanowear. Substrate bias voltage was also a significant parameter, producing hardening and a wear-reducing effect at all thicknesses tested. The use of a Cr adhesion layer produced beneficial results at 150 nm thickness, but was ineffective at 50 nm. Argon flow to the coating chamber produced a complex effect. All effects reduced significantly with reducing film thickness. Classic fretting wear was produced at low amplitude under nanowear testing. Reciprocating sliding was produced at higher amplitude, which generated three-body abrasive wear, generally consistent with the Archard model. Specific wear rates were very low (typically 10^-16 to 10^-18 m^3 N^-1 m^-1). Wear rates reduced exponentially with reduced film thickness, and below approximately 20 nm, thickness was identified as the most important control of wear.
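For reference, the specific wear rate quoted above is the Archard-style quantity k = V / (F * s), i.e. worn volume per unit load per unit sliding distance; a minimal sketch with made-up input values:

```python
# Specific wear rate in the Archard sense, k = V / (F * s): worn volume per
# unit load per unit sliding distance (units m^3 N^-1 m^-1, matching the
# 10^-16 to 10^-18 range quoted above). The input values are made-up examples.
def specific_wear_rate(volume_m3, load_n, distance_m):
    return volume_m3 / (load_n * distance_m)

print(specific_wear_rate(volume_m3=2e-15, load_n=1e-3, distance_m=1e4))
# -> 2e-16 m^3 N^-1 m^-1, within the range reported for these films
```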

Relevance: 100.00%

Abstract:

The CONNECT European project, which started in February 2009, aims to eliminate the interoperability barriers faced by today's distributed systems. It does so by adopting a revolutionary approach to the seamless networking of digital systems, namely synthesizing on the fly the connectors via which networked systems communicate.

Relevance: 100.00%

Abstract:

The development of increasingly powerful computers, which has enabled the use of windowing software, has also opened the way for the computer study, via simulation, of very complex physical systems. In this study, the main issues related to the implementation of interactive simulations of complex systems are identified and discussed. Most existing simulators are closed, in the sense that there is no access to the source code and, even if it were available, adaptation to interaction with other systems would require extensive code rewriting. This work aims to increase the flexibility of such software by developing a set of object-oriented simulation classes which can be extended, by subclassing, at any level, i.e., at the problem-domain, presentation or interaction level. A strategy is proposed which involves the use of an object-oriented framework, the concurrent execution of several simulation modules, the use of a networked windowing system and the reuse of existing software written in procedural languages. A prototype tool combining these techniques has been implemented and is presented. It allows the on-line definition of the configuration of the physical system and generates the appropriate graphical user interface. Simulation routines have been developed for the chemical recovery cycle of a paper pulp mill. The application of the prototype, by the creation of new classes, to the interactive simulation of this physical system is described. Besides providing visual feedback, the resulting graphical user interface greatly simplifies interaction with this set of simulation modules. This study shows that considerable benefits can be obtained by applying computer science concepts to the engineering domain, helping domain experts to tailor interactive tools to suit their needs.
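A minimal sketch of the subclassing idea (hypothetical classes and toy dynamics; the actual framework also covers the presentation and interaction levels through a networked windowing system):

```python
# Sketch of extensible simulation classes specialised by subclassing at the
# problem-domain and presentation levels (hypothetical names, toy dynamics).
class SimulationModule:
    def step(self, dt):    # problem-domain level: override the physics
        raise NotImplementedError
    def render(self):      # presentation level: override the display
        print(self)

class Tank(SimulationModule):       # a domain subclass, e.g. a liquor tank in
    def __init__(self, level=1.0):  # a pulp mill's chemical recovery cycle
        self.level = level
    def step(self, dt):
        self.level += 0.1 * dt      # toy dynamics standing in for a real model
    def __str__(self):
        return f"Tank level: {self.level:.2f}"

modules = [Tank(), Tank(0.5)]       # several simulation modules run side by side
for _ in range(3):
    for m in modules:
        m.step(dt=0.1)
        m.render()
```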