902 results for Distributed computer-controlled systems
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the tools necessary to automate the calculation procedure as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for projects other than xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of the functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
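Since GiNaC is implemented as an ordinary C++ library, symbolic expressions are first-class C++ objects. A minimal usage sketch (the expression is illustrative, not an xloops computation):

```cpp
// Minimal GiNaC sketch: symbolic expressions as ordinary C++ objects.
// The expression below is illustrative only, not taken from xloops.
#include <iostream>
#include <ginac/ginac.h>
using namespace GiNaC;

int main() {
    symbol x("x"), y("y");
    ex e = pow(x + y, 3);                  // build a symbolic expression
    std::cout << e.expand() << std::endl;  // -> x^3+3*x^2*y+3*x*y^2+y^3
    std::cout << e.diff(x)  << std::endl;  // symbolic differentiation
    return 0;
}
```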
Abstract:
BACKGROUND: Engineered nanoparticles are becoming increasingly ubiquitous and their toxicological effects on human health, as well as on the ecosystem, have become a concern. Since initial contact with nanoparticles occurs at the epithelium in the lungs (or skin, or eyes), in vitro cell studies with nanoparticles require dose-controlled systems for delivering nanoparticles to epithelial cells cultured at the air-liquid interface. RESULTS: A novel air-liquid interface cell exposure system (ALICE) for nanoparticles in liquids is presented and validated. The ALICE generates a dense cloud of droplets with a vibrating-membrane nebulizer and utilizes combined cloud settling and single-particle sedimentation for fast (~10 min for an entire exposure), repeatable (<12%), low-stress and efficient delivery of nanoparticles, or dissolved substances, to cells cultured at the air-liquid interface. Validation with various types of nanoparticles (Au, ZnO and carbon black nanoparticles) and solutes (such as NaCl) showed that the ALICE provided spatially uniform deposition (<1.6% variability) and had no adverse effect on the viability of a widely used alveolar human epithelial-like cell line (A549). The dose deposited on the cells can be controlled with a quartz crystal microbalance (QCM) over a dynamic range of at least 0.02-200 µg/cm². The cell-specific deposition efficiency is currently limited to 0.072 (7.2% for two commercially available 6-well Transwell plates), but a deposition efficiency of up to 0.57 (57%) is possible with better cell coverage of the exposure chamber. Dose-response measurements with ZnO nanoparticles (0.3-8.5 µg/cm²) showed significant differences in mRNA expression of pro-inflammatory (IL-8) and oxidative stress (HO-1) markers when comparing submerged and air-liquid interface exposures. Both exposure methods showed no cellular response below 1 µg/cm² ZnO, which indicates that ZnO nanoparticles are not toxic at occupationally allowed exposure levels. CONCLUSION: The ALICE is a useful tool for dose-controlled exposure of cells to nanoparticles (or solutes) at the air-liquid interface. The significant differences in cellular response to ZnO nanoparticles under submerged and air-liquid interface conditions suggest that pharmaceutical and toxicological studies with inhaled (nano)particles should be performed under the more realistic air-liquid interface conditions rather than under submerged cell conditions.
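The quoted deposition efficiency directly fixes how much material must be nebulised to reach a given cell-delivered dose. A small worked sketch of that arithmetic (the insert area and target dose are illustrative values, not from the paper):

```cpp
// Hypothetical sketch: mass to nebulise for a target cell-delivered dose,
// using the deposition efficiency quoted in the abstract (7.2%).
#include <iostream>

int main() {
    const double efficiency  = 0.072;  // fraction of nebulised mass reaching cells
    const double target_dose = 1.0;    // µg/cm², e.g. the reported response threshold
    const double cell_area   = 4.2;    // cm², illustrative Transwell insert area
    double nebulised_mass = target_dose * cell_area / efficiency;  // µg
    std::cout << "nebulise ~" << nebulised_mass << " µg of nanoparticles\n";
    return 0;
}
```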
Abstract:
Inhalation of ambient air particles, or of engineered nanoparticles (NP) handled as powders, dispersions or sprays in industrial processes and contained in consumer products, poses a potential and largely unknown risk of incidental exposure. For the efficient, economical and ethically sound evaluation of health hazards from inhaled nanomaterials, animal-free and realistic in vitro test systems are desirable. The new Nano Aerosol Chamber for In-Vitro Toxicity studies (NACIVT) has been developed and fully characterized with regard to its performance. NACIVT features computer-controlled temperature and humidity conditioning, preventing cellular stress during exposure and allowing long-term exposures. Airborne NP are deposited out of a continuous air stream simultaneously onto up to 24 cell cultures on Transwell® inserts, allowing high-throughput screening. In NACIVT, polystyrene as well as silver particles were deposited uniformly and efficiently on all 24 Transwell® inserts. Particle-cell interaction studies confirmed that deposited particles reach the cell surface and can be taken up by cells. As demonstrated in control experiments, there was no evidence of any adverse effect on human bronchial epithelial cells (BEAS-2B) due to the exposure treatment in NACIVT. The new, fully integrated and transportable deposition chamber NACIVT provides a promising tool for reliable, acute and sub-acute dose-response studies of (nano)particles in air-exposed tissues cultured at the air-liquid interface.
Abstract:
Dendritic computation is a term that has been present in neurophysiological research for a long time [1]. It is still controversial and far from being clarified within the concepts of both computation and neurophysiology [2], [3]. In any case, it has not been integrated into a formal computational scheme or structure, nor into formulations of artificial neural nets. Our objective here is to formulate a type of distributed computation that resembles dendritic trees, in such a way that it shows the advantages of neural-network distributed computation, chiefly the reliability exhibited when there are holes (scotomas) in the computing net, without "blind spots".
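One way to make the robustness claim concrete is a net of units with heavily overlapping receptive fields, so that deleting units degrades but never blinds the computation. The sketch below is our own illustration of that idea, not the authors' formulation:

```cpp
// Illustrative sketch: overlapping "dendritic" units; knocking some out
// (scotomas) degrades coverage but leaves no input entirely unseen.
#include <iostream>
#include <vector>

int main() {
    std::vector<double> input = {1, 2, 3, 4, 5, 6, 7, 8};
    const std::size_t width = 4;                                  // receptive-field size
    std::vector<bool> alive = {true, false, false, true, true};   // two dead units

    // Unit i sees inputs [i, i+width); the fields overlap heavily.
    std::vector<int> seen(input.size(), 0);
    for (std::size_t i = 0; i + width <= input.size(); ++i) {
        if (!alive[i]) continue;                  // a hole in the computing net
        for (std::size_t j = i; j < i + width; ++j) ++seen[j];
    }
    for (std::size_t j = 0; j < input.size(); ++j)  // every input still covered
        std::cout << "input " << j << " covered by " << seen[j] << " unit(s)\n";
    return 0;
}
```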
Abstract:
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost when compared to the task creation and communication costs and other such practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received, and also of the kind of load the particular task is going to pose; this can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource-consumption assurances and efficient checking of such certificates.
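The scheduling criterion on the distributing side reduces to a cost comparison: a task is shipped to another node only when its estimated cost outweighs the creation and communication overheads. A hypothetical sketch of that rule (Ciao itself derives cost estimates from static analysis and assertions; the constants and names below are illustrative):

```cpp
// Hypothetical granularity-control sketch: run a task remotely only when
// its estimated cost outweighs creation and communication overheads.
#include <functional>
#include <iostream>

struct Task {
    std::function<void()> body;
    double estimated_cost;            // e.g. from a static cost analysis
};

const double kSpawnOverhead = 1e-3;   // task creation cost (illustrative units)
const double kCommOverhead  = 5e-3;   // marshalling + network cost (illustrative)

void run_local(const Task& t)  { std::cout << "local\n";  t.body(); }
void run_remote(const Task& t) { std::cout << "remote\n"; t.body(); }  // stubbed shipping

void schedule(const Task& t) {
    if (t.estimated_cost > kSpawnOverhead + kCommOverhead)
        run_remote(t);                // worth distributing
    else
        run_local(t);                 // too fine-grained; keep it here
}

int main() {
    schedule({[]{ /* coarse work */ }, 0.5});    // distributed
    schedule({[]{ /* tiny work   */ }, 1e-4});   // kept local
    return 0;
}
```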
Abstract:
We recover and develop some robotic-systems concepts (in the light of present-day systems tools) that originated with an intended Mars rover in the 1960s at the Instrumentation Laboratory of MIT, where one of the authors was involved. The basic concepts came from the specifications for a type of generalized robot inspired by the structure of the vertebrate nervous system, where the decision system was based on the structure and function of the Reticular Formation (RF). The vertebrate RF is supposed to commit the whole organism to one among various modes of behavior, thus taking the decisions about the present overall task. That is, it is a kind of command-and-control system. In this updating of the concepts, the basic idea is that the RF comprises a set of computing units such that each computing module receives information only from a reduced part of the overall, little-processed sensory inputs. Each computing unit is capable both of general diagnostics about overall input situations and of specialized diagnostics according to the values of a concrete subset of the input lines. Slaved to this command-and-control computer are the sensors, the representations of the external environment, the structures for modeling and planning and, finally, the effectors acting on the external world.
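The decision scheme described, in which modules each see only a slice of the inputs yet jointly commit the robot to one mode of behavior, can be sketched as a vote; the modes, thresholds and majority rule below are our illustrative assumptions, not the original specification:

```cpp
// Illustrative reticular-formation sketch: each module sees a slice of the
// inputs and votes for an overall behavior mode; the majority commits the robot.
#include <array>
#include <iostream>
#include <vector>

enum class Mode { Explore, Avoid, Recharge };

Mode module_vote(const std::vector<double>& slice) {
    double s = 0;
    for (double v : slice) s += v;
    if (s > 2.0) return Mode::Avoid;     // specialized diagnostic (illustrative)
    if (s < 0.5) return Mode::Recharge;
    return Mode::Explore;                // general diagnostic by default
}

int main() {
    std::vector<double> sensors = {0.2, 0.9, 1.5, 0.1, 0.4, 0.8};
    std::array<int, 3> tally{};
    for (std::size_t i = 0; i + 2 <= sensors.size(); i += 2)  // each module: 2 inputs
        tally[static_cast<int>(module_vote({sensors[i], sensors[i + 1]}))]++;
    int best = 0;
    for (int m = 1; m < 3; ++m)
        if (tally[m] > tally[best]) best = m;
    std::cout << "committed mode index: " << best << '\n';    // commands slave systems
    return 0;
}
```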
Abstract:
Distributed computing has been around for quite a long time, but it is now becoming more and more important; in recent years the cloud computing model has gained great popularity, as the number of products on the market proves. Every computing system needs to be controlled through monitoring systems that report its state, so that it can be managed easily. Most of the monitoring products currently available, however, limit how faithfully the architecture of the monitored system can be visualized: the view they provide is often shaped by the design of the visualization tool itself, hiding the levels of the architecture and the relationships between them, which can make administrators' work difficult. This work presents a monitoring system for distributed or cloud systems that aims to solve this problem by not constraining the representation of the monitored system's architecture. The system is composed of: agents, which collect the metrics of the monitored system; a server, to which the agents send the metrics so that they are stored in a database; and a web application, through which all the information is visualized. The system has been tested successfully by monitoring CumuloNimbo, a platform as a service (PaaS) that offers an SQL interface and highly scalable transactional processing on top of key-value stores. This work describes the architecture of the monitoring system and, in particular, the development of its main contribution, the web application.
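The agent/server split described can be sketched as a metric record pushed periodically to a collector; the field names, interval and stubbed transport below are hypothetical, not the system's or CumuloNimbo's actual interface:

```cpp
// Hypothetical agent-side sketch: sample metrics and push them to the server
// that stores them for the web application. The transport is stubbed out.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

struct Metric {
    std::string host, name;
    double value;
    long long timestamp_ms;
};

void send_to_server(const Metric& m) {            // stub for e.g. an HTTP POST
    std::cout << m.host << " " << m.name << "=" << m.value << "\n";
}

int main() {
    using namespace std::chrono;
    for (int i = 0; i < 3; ++i) {                 // a real agent loops forever
        Metric m{"node-1", "cpu.load", 0.42,      // sampled value (illustrative)
                 duration_cast<milliseconds>(
                     system_clock::now().time_since_epoch()).count()};
        send_to_server(m);
        std::this_thread::sleep_for(seconds(1));  // collection interval
    }
    return 0;
}
```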
Abstract:
Thesis (M. S.)--University of Illinois at Urbana-Champaign.
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and to ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, providing fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol models allows the analysis of the complete system in a unified manner. A common problem for Petri-net-based techniques is that of state-space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow the re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. The hybrid approach is applied to the Petri-net-based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify the causal and temporal aspects of the designs in a unified manner.
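The baseline that the thesis extends is the standard two-phase commit protocol; the sketch below adds the kind of timeout-driven abort that real-time behaviour requires. All names are illustrative and the participant transport is stubbed:

```cpp
// Sketch of two-phase commit with a timeout, the baseline protocol the
// thesis extends for real-time control (participant transport is stubbed).
#include <chrono>
#include <iostream>
#include <optional>
#include <vector>

enum class Vote { Commit, Abort };

// Stub: ask a participant to prepare; nullopt models a missed deadline.
std::optional<Vote> prepare(int participant, std::chrono::milliseconds deadline) {
    (void)deadline;
    return participant == 2 ? std::nullopt : std::optional<Vote>(Vote::Commit);
}

int main() {
    std::vector<int> participants = {0, 1, 2};
    bool commit = true;
    for (int p : participants) {                       // phase 1: voting
        auto v = prepare(p, std::chrono::milliseconds(50));
        if (!v || *v == Vote::Abort) { commit = false; break; }  // timeout => abort
    }
    for (int p : participants)                         // phase 2: decision
        std::cout << (commit ? "COMMIT" : "ABORT") << " -> participant " << p << '\n';
    return 0;
}
```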
Abstract:
This thesis describes a detailed study of advanced fibre grating devices using Bragg (FBG) and long-period (LPG) structures and their applications in optical communications and sensing. The major contributions presented in this thesis are summarised below. One of the most important contributions of the research work presented in this thesis is a systematic theoretical study of many distinctive fibre grating structures. Starting from the Maxwell equations, the coupled-mode equations for both FBGs and LPGs were derived and the mode-overlap factor was discussed analytically. Computer simulation programmes utilising the transfer-matrix method, based on the models built upon the coupled-mode equations, were developed, enabling simulations of the spectral response in terms of reflectivity, bandwidth, sidelobes and dispersion for gratings of different structures, including uniform and chirped, phase-shifted, Moiré and sampled Bragg gratings, and phase-shifted and cascaded long-period gratings. Although the majority of these structures were modelled numerically, analytical expressions for some complex structures were developed with a clear physical picture. Several apodisation functions were proposed to improve sidelobe suppression, which guided the effective production of practical devices for demanding applications. Fibre grating fabrication is the other major part of the Ph.D. programme. Both the holographic and scan-phase-mask methods were employed to fabricate Bragg and long-period gratings of standard and novel structures. Significant improvements were made in particular to the scan-phase-mask method, enabling arbitrary tailoring of the spectral response of grating devices. Two specific techniques - slow-shifting and fast-dithering of the phase-mask, implemented by a computer-controlled piezo - were developed to write high-quality phase-shifted, sampled and apodised gratings. A large number of LabVIEW programmes were constructed to implement the standard and novel fabrication techniques. In addition, some fundamental studies of grating growth in relation to UV-exposure- and hydrogenation-induced index change were carried out. In particular, Type IIa gratings in non-hydrogenated B/Ge co-doped fibres and a re-generated grating in hydrogenated B/Ge fibre were investigated, showing a significant reduction in thermal coefficient. Optical sensing applications utilising fibre grating devices form the third major part of the research work presented in this thesis. Several experiments in novel sensing and sensing-demodulation were carried out. For the first time, an intensity- and wavelength-dual-coding interrogation technique was demonstrated, showing a significantly enhanced capacity for grating-sensor multiplexing. Based on mode-splitting measurement, instead of the conventional wavelength-shifting detection technique, successful demonstrations were also made of optical load and bend sensing of ultra-high sensitivity employing LPG structures. In addition, edge filters and low-loss, high-rejection bandpass filters with a 50 nm stop-band were fabricated for application in optical sensing and high-speed telecommunication systems.
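For a uniform Bragg grating, the coupled-mode equations admit a well-known closed-form reflectivity (reducing to tanh²(κL) at zero detuning), which such simulation programmes generalise to non-uniform structures via transfer matrices. A sketch of the uniform-grating formula, with illustrative parameter values:

```cpp
// Sketch: power reflectivity of a uniform FBG from coupled-mode theory,
// R = |k*sinh(g*L)|^2 / |d*sinh(g*L) + i*g*cosh(g*L)|^2, g = sqrt(k^2 - d^2).
// Parameter values are illustrative only.
#include <complex>
#include <iostream>

double reflectivity(double kappa, double detuning, double length) {
    std::complex<double> g = std::sqrt(std::complex<double>(
        kappa * kappa - detuning * detuning, 0.0));
    std::complex<double> num = kappa * std::sinh(g * length);
    std::complex<double> den = detuning * std::sinh(g * length)
                             + std::complex<double>(0, 1) * g * std::cosh(g * length);
    return std::norm(num) / std::norm(den);   // |num|^2 / |den|^2
}

int main() {
    double kappa = 200.0;                     // coupling coefficient, 1/m
    double L = 0.01;                          // grating length, m
    // At zero detuning this reduces to tanh^2(kappa*L).
    for (double d = -600; d <= 600; d += 300) // detuning sweep, 1/m
        std::cout << "delta=" << d << "  R=" << reflectivity(kappa, d, L) << '\n';
    return 0;
}
```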
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques ranging from the mathematically intensive to those with only some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding while the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between the two formalisms are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems; this thesis adapts them for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri-net-based techniques is the complexity associated with generating the reachability graph. This thesis addresses the problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system, which is synchronised with the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered, and the resulting nets are analysed using concurrency sets.
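The Petri net semantics underlying the reachability analysis is compact: a transition is enabled when every input place holds enough tokens, and firing consumes and produces tokens. A minimal sketch (the two-transition net is an arbitrary example, not one of the thesis models):

```cpp
// Minimal Petri net sketch: enabling test and firing rule, the semantics on
// which reachability-graph analysis builds. The example net is arbitrary.
#include <iostream>
#include <vector>

using Marking = std::vector<int>;

struct Transition { Marking pre, post; };   // tokens consumed / produced per place

bool enabled(const Marking& m, const Transition& t) {
    for (std::size_t p = 0; p < m.size(); ++p)
        if (m[p] < t.pre[p]) return false;
    return true;
}

Marking fire(Marking m, const Transition& t) {
    for (std::size_t p = 0; p < m.size(); ++p)
        m[p] += t.post[p] - t.pre[p];
    return m;
}

int main() {
    Marking m = {1, 0};                            // one token in place 0
    Transition t01{{1, 0}, {0, 1}}, t10{{0, 1}, {1, 0}};
    for (int i = 0; i < 4; ++i) {                  // fire whichever is enabled
        const Transition& t = enabled(m, t01) ? t01 : t10;
        m = fire(m, t);
        std::cout << "marking: (" << m[0] << "," << m[1] << ")\n";
    }
    return 0;
}
```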
Abstract:
The thesis describes an investigation into methods for the specification, design and implementation of computer control systems for flexible manufacturing machines comprising multiple, independent, electromechanically driven mechanisms. An analysis is made of the elements of conventional mechanically coupled machines so that the operational functions of these elements may be identified. This analysis is used to define the scope of the requirements necessary to specify the format, function and operation of a flexible, independent-drive-mechanism (IDM) machine. A discussion of how this type of machine can accommodate the modern manufacturing needs for high speed and flexibility is presented. A sequential method of capturing requirements for such machines is detailed, based on a hierarchical partitioning of machine requirements from product down to individual drive mechanism. A classification of mechanisms using notations, including data-flow diagrams and Petri nets, is described which supports the capture and allows the validation of requirements. A generic design for a modular IDM machine controller is derived, based upon the hierarchy of control identified in these machines. A two-mechanism experimental machine is detailed, which is used to demonstrate the application of the specification, design and implementation techniques. A prototype computer controller and a fully flexible implementation for the IDM machine, based on Petri net models described using the concurrent programming language Occam, are detailed. The ability of this modular computer controller to support the flexible, safe and fault-tolerant operation of the two intermittent-motion, discrete-synchronisation independent drive mechanisms is presented. The application of the machine development methodology to industrial projects is established.
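The discrete synchronisation of independent drive mechanisms, where each drive runs its own motion profile but the drives rendezvous at cycle boundaries, can be illustrated with a barrier; the threads below merely stand in for the Occam processes the thesis actually uses, and all names are illustrative:

```cpp
// Illustration only: two independently driven mechanisms synchronising at
// discrete cycle points, as the Occam-based controller does with channels.
#include <barrier>
#include <iostream>
#include <thread>

int main() {
    std::barrier sync_point(2);            // both drives must reach the rendezvous

    auto mechanism = [&](const char* name, int cycles) {
        for (int c = 0; c < cycles; ++c) {
            // ... run this drive's own motion profile for cycle c ...
            std::cout << name << " finished cycle " << c << '\n';
            sync_point.arrive_and_wait();  // discrete synchronisation point
        }
    };

    std::thread a(mechanism, "feed-drive", 3), b(mechanism, "index-drive", 3);
    a.join();
    b.join();
    return 0;
}
```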
Abstract:
The present study describes a pragmatic approach to the implementation of production planning and scheduling techniques in foundries of all types and looks at the use of 'state-of-the-art' management control and information systems. Following a review of systems for the classification of manufacturing companies, a definitive statement is made which highlights the important differences between foundries (i.e. 'component makers') and other manufacturing companies (i.e. 'component buyers'). An investigation of the manual procedures which are used to plan and control the manufacture of components reveals the inherent problems facing foundry production management staff, and suggests the unsuitability of many manufacturing techniques which have been applied to general engineering companies. From the literature it was discovered that computer-assisted systems are required which are primarily 'information-based' rather than 'decision-based', whilst the availability of low-cost computers and packaged software has enabled foundries to 'get their feet wet' without the financial penalties which characterized many of the early (i.e. pre-1980) attempts at computer assistance. Moreover, no evidence of a single methodology for foundry scheduling emerged from the review. A philosophy for the development of a CAPM system is presented, which details the essential information requirements and puts forward proposals for the subsequent interactions between the types of information and the sub-systems of CAPM which they support. The work was oriented specifically towards the functions of production planning and scheduling and introduces the concept of 'manual interaction' for effective scheduling. The techniques developed were designed to use the information which is readily available in foundries and were found to be practically successful following their implementation in a wide variety of foundries. The limitations of the techniques developed are subsequently discussed within the wider issues which form a CAPM system, prior to a presentation of the conclusions which can be drawn from the study.
Abstract:
Two alternative work designs are identified for operators of stand-alone advanced manufacturing technology (AMT). In the case of specialist control, operators are limited to running and monitoring the technology, with operating problems handled by specialists, such as engineers. In the case of operator control, operators are given much broader responsibilities and deal directly with the majority of operating problems encountered. The hypothesis that operator control would promote better performance and psychological well-being than the more prevalent specialist control was tested in a longitudinal field study involving work redesign for operators of computer-controlled assembly machines. The change from specialist to operator control reduced downtime, especially for high-variance systems, and was associated with greater intrinsic job satisfaction and less perceived work pressure. The implications of these findings for both small- and large-scale applications of AMT are discussed.
Abstract:
This paper describes the work undertaken in the Scholarly Ontologies project. The aim of the project has been to develop a computational approach to support scholarly sensemaking, through interpretation and argumentation, enabling researchers to make claims: to describe and debate their view of a document's key contributions and its relationships to the literature. The project has investigated the technicalities and practicalities of capturing conceptual relations, within and between conventional documents, in terms of abstract ontological structures. In this way, we have developed a new kind of index to distributed digital library systems. This paper reports a case study undertaken to test the sensemaking tools developed by the Scholarly Ontologies project. The tools used were ClaiMapper, which allows the user to sketch argument maps of individual papers and their connections; ClaiMaker, a server on which such models can be stored and which provides interpretative services to assist the querying of argument maps across multiple papers; and ClaimFinder, a novice interface to the search services in ClaiMaker.