953 results for pacs: distributed system software


Relevance: 100.00%

Publisher:

Abstract:

We are living through an age of Internetification. Nowadays, Internet connections are a utility whose presence one can simply assume. The web has become a place where content is generated by users, and the information generated surpasses the notion with which the World Wide Web emerged because, in most cases, this content has been designed to be consumed by humans, not by machines. This implies a change of mindset in the way we design systems: they must be able to support computational and storage loads that grow apparently without end. At the same time, higher education is in a state of crisis: the high cost of high-quality education threatens the academic world. Through the use of technology, we could achieve an increase in productivity and quality, and a reduction of these costs, in a field that has remained largely unchanged since the Renaissance. In CloudRoom, a MOOC platform has been designed with an architecture that follows the latest conventions in Cloud Computing, involving the use of REST services and NoSQL databases, and applying the latest W3C recommendations on web development and Linked Data. For its construction, agile Software Engineering methods, Human-Computer Interaction techniques, and state-of-the-art technologies such as Neo4j, Redis, Node.js, AngularJS, Bootstrap, HTML5, CSS3 and Amazon Web Services have been used. Furthermore, a comprehensive Informatics Engineering effort has been carried out, combining virtually all of the fundamental areas of knowledge in Computer Science. In summary, the pillars of a robust, maintainable, distributed system have been devised: a system with social and semantic capabilities, which runs on multiple devices and scales to millions of users.
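
To make the architectural pattern concrete, here is a minimal sketch of a REST endpoint backed by a Redis cache, in the spirit of the stack described; Flask stands in for the Node.js layer, and the route, keys and TTL are illustrative assumptions, not taken from CloudRoom:

```python
# A sketch of the REST-plus-NoSQL pattern: serve from a Redis cache when
# possible, fall back to the primary store (Neo4j in CloudRoom's case).
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, db=0)

@app.route("/courses/<course_id>")
def get_course(course_id):
    cached = cache.get(f"course:{course_id}")
    if cached is not None:
        return jsonify(json.loads(cached))       # cache hit
    course = {"id": course_id, "title": "placeholder"}  # stand-in for a Neo4j lookup
    cache.setex(f"course:{course_id}", 300, json.dumps(course))  # 5-minute TTL
    return jsonify(course)

if __name__ == "__main__":
    app.run()
```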

Relevance: 100.00%

Publisher:

Abstract:

This project consisted of designing and implementing a home automation system that can be installed in a house to control various environmental variables and thus achieve maximum comfort for the inhabitants, automatically or manually according to their tastes and needs. The main feature of this system is its distributed operation: a server is responsible for the general decisions governing the behaviour of the house, while a series of slave controllers keep the environmental variables at the values set by the server. The house is thus kept in a state of constant comfort for anyone inside. The system has been designed to minimize wiring and ease installation, so communication between devices is wireless, using the ZigBee protocol described in the IEEE 802.15.4 standard. For this purpose, XBee wireless communication modules have been used, which allow two devices to communicate with each other. The distributed system is controlled through a web application whose graphical interface lets the user operate the different devices inside the house and thereby set the environmental variables to their liking. This graphical user interface (GUI) does not depend on any specific software; only an HTTP client, such as Internet Explorer, Mozilla Firefox or Google Chrome, is needed. The system is integrated on a low-cost mini computer, the Raspberry Pi, which hosts an Apache server that manages and automates the environmental variables. The devices that modify and stabilize these variables are driven by generic controllers built around 80C51F410 microcontrollers, of the 80C51 family, together with the components and circuitry they require. There are two types of controllers: sensor controllers, responsible for measuring environmental values such as light and temperature; and actuator controllers, responsible for acting on the devices that modify and stabilize the environmental variables, such as the heating, LED lighting strips, blinds, alarms, etc. Together, the Raspberry Pi and the various controllers form the prototype designed for this final-year project, which can easily be extended to cover a wide range of possibilities and functionality within the comfort of a home.
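
As an illustration of the server-to-slave link the abstract describes, below is a sketch of building and sending a ZigBee Transmit Request API frame through an XBee module with pyserial; the serial port, destination address and setpoint payload are assumptions made for illustration:

```python
# Build and transmit one XBee API-mode frame (type 0x10, Transmit Request).
import serial

def xbee_tx_frame(dest64: bytes, payload: bytes) -> bytes:
    """Assemble a ZigBee Transmit Request API frame around the payload."""
    frame_data = (
        bytes([0x10, 0x01])      # frame type, frame ID
        + dest64                 # 64-bit destination address
        + bytes([0xFF, 0xFE])    # 16-bit address unknown
        + bytes([0x00, 0x00])    # broadcast radius, options
        + payload
    )
    length = len(frame_data)
    checksum = (0xFF - (sum(frame_data) & 0xFF)) & 0xFF
    return bytes([0x7E, length >> 8, length & 0xFF]) + frame_data + bytes([checksum])

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # assumed port/baud
    dest = bytes.fromhex("0013A20012345678")                  # example XBee address
    port.write(xbee_tx_frame(dest, b"TEMP=21.5"))             # hypothetical setpoint
```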

Relevance: 100.00%

Publisher:

Abstract:

Predicting failures in a distributed system based on previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: in the prediction of rare events, which do not appear in a large enough proportion for the algorithm to capture, and in environments with too many variables, where logistic regression tends to overfit while manually selecting a subset of variables to create the model is error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic net logistic regression, a method that allows us to automatically select useful variables, a process of cross-validation on top of it, and the application of a rare-events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted with a small number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
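
A minimal sketch of this combination using scikit-learn: elastic-net logistic regression with cross-validated hyper-parameters. Simple class weighting is used here as a stand-in for the paper's own rare-events technique, and the data names are placeholders:

```python
# Elastic-net logistic regression with a cross-validated hyper-parameter
# search on top, mirroring the "two layers" of the described process.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {
    "C": [0.01, 0.1, 1.0, 10.0],   # inverse regularisation strength
    "l1_ratio": [0.1, 0.5, 0.9],   # elastic-net mix between L1 and L2
}
model = LogisticRegression(
    penalty="elasticnet",
    solver="saga",                 # the solver that supports elastic net
    class_weight="balanced",       # up-weight rare (failure) events
    max_iter=5000,
)
search = GridSearchCV(model, param_grid, cv=5, scoring="balanced_accuracy")
# search.fit(X_train, y_train)    # X_train, y_train: event-history features/labels
# print(search.best_params_, search.best_score_)
```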

Relevance: 100.00%

Publisher:

Abstract:

This project proposes the definition and implementation of a monitoring subsystem for a previously defined distributed real-time system. The monitor supervises the status of all software and hardware components of the original system, and allows each component, or the complete subsystem, to be started and stopped. It consists of two basic components: a local supervisor for each subsystem, and a central supervisor with a graphical user interface (GUI). The local supervisor is a software component attached to each subsystem that monitors components, starts and stops the associated subsystem, and sends status reports to the central supervisor; it also responds to start and stop commands from the central supervisor. The central supervisor receives the status reports from each local supervisor and allows the subsystems to be started and stopped; it offers a graphical interface acting as a main control panel. The system is developed entirely in Ada 95 (except for the graphical control panel) and runs on any of the most common Linux distributions. In terms of Software Engineering, the project follows a waterfall life cycle, delivering the requirements, the design, the code and a test plan.
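
The system itself is written in Ada 95; purely as an illustration of the local-supervisor reporting loop, here is a sketch in Python with assumed host, port and message fields:

```python
# A local supervisor periodically reports component states to the central
# supervisor; endpoint and message layout are assumptions for illustration.
import json
import socket
import time

CENTRAL = ("127.0.0.1", 5000)   # assumed central-supervisor endpoint

def report_status(component_states: dict) -> None:
    """Send one status report to the central supervisor over UDP."""
    msg = json.dumps({"subsystem": "ssA", "states": component_states,
                      "ts": time.time()})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), CENTRAL)

while True:
    # In the real system these states come from monitoring each software
    # and hardware component of the subsystem.
    report_status({"sensor_driver": "RUNNING", "logger": "RUNNING"})
    time.sleep(1.0)             # reporting period
```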

Relevance: 100.00%

Publisher:

Abstract:

In recent years, many real-time applications have needed to handle data streams. We consider distributed environments in which remote data sources keep collecting data from the real world or from other data sources and continuously push the data to a central stream processor. In such environments, transmitting rapid, high-volume, time-varying data streams induces significant communication, and computing overhead is also incurred at the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a search algorithm over a data structure of two height-balanced trees, and it avoids transmitting duplicate items in data streams, thus saving substantial network resources. In addition, a theoretical analysis of the time spent performing the search, and of the amount of memory needed, is provided. Extensive experiments also show that the DTFilter approach achieves high performance.
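
DTFilter itself relies on two height-balanced trees; the sketch below approximates the same observable behaviour, suppressing duplicates within the current window before transmission, using a hash map and a min-heap instead:

```python
# Suppress duplicate items within a sliding time window at the data source,
# so only first occurrences are transmitted to the central processor.
import heapq

class WindowedDistinctFilter:
    def __init__(self, window: float):
        self.window = window
        self.last_sent = {}   # item -> timestamp of last transmission
        self.expiry = []      # min-heap of (timestamp, item)

    def should_send(self, item, now: float) -> bool:
        # Evict entries that have fallen out of the window.
        while self.expiry and self.expiry[0][0] <= now - self.window:
            ts, old = heapq.heappop(self.expiry)
            if self.last_sent.get(old) == ts:
                del self.last_sent[old]
        if item in self.last_sent:
            return False      # duplicate within the window: do not transmit
        self.last_sent[item] = now
        heapq.heappush(self.expiry, (now, item))
        return True
```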

Relevance: 100.00%

Publisher:

Abstract:

The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of the solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base-space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of the inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments, to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. To accelerate exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure, manageable, fault-tolerant, open, distributed, agile, Total-Quality-Managed, ISO 9000+ conformant, Just-in-Time manufacturing systems.
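
The "inverse augmented Jacobian correction" belongs to the family of Jacobian-based local linearisations of inverse kinematics; as a rough illustration (not the thesis's exact construction), one damped pseudoinverse correction step looks like this, with the forward-kinematics and Jacobian functions left as placeholders:

```python
# One local linear inverse-kinematics correction: q' = q + J^+ (x_target - fk(q)),
# using a damped least-squares pseudoinverse that stays robust near singularities.
import numpy as np

def ik_step(q, x_target, fk, jac, damping=1e-3):
    err = x_target - fk(q)                       # Cartesian error at q
    J = jac(q)                                   # manipulator Jacobian at q
    JJt = J @ J.T + damping * np.eye(J.shape[0]) # damped normal matrix
    return q + J.T @ np.linalg.solve(JJt, err)   # corrected joint vector
```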

Relevance: 100.00%

Publisher:

Abstract:

Devising a general engineering theory of multifunctional diagnostic systems for non-invasive medical spectrophotometry is an important and promising direction of modern biomedical engineering. In this study we aim to formalise, in engineering terms, the objectives of a multifunctional laser non-invasive diagnostic system (MLNDS). A structure-functional model, as well as a task-function, of a generalised MLNDS was formulated and developed. The key role of the system software in the general MLNDS architecture at the conceptual and technical design stages has been demonstrated. Basic principles for the block-module composition of MLNDS hardware are suggested as well. © 2011 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).

Relevance: 100.00%

Publisher:

Abstract:

Years have passed since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still at an early stage, largely due to the lack of a breakthrough in minimizing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cater for the increased optimization complexity driven by the uptake of smartphones and tablets, however, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, whose results demonstrate a speed-up of 7.5 times on a 16-core distributed system, compared with 6.1 times for our previous solution, with no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
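
As a sketch of the asynchronous master/worker pattern such a design implies (not the paper's implementation), workers can optimise independent partitions while the master collects results as they arrive, with no synchronisation barrier between partitions; the partition task is a placeholder:

```python
# Asynchronous collection of independently optimised partitions: results are
# consumed as workers finish, rather than at a per-iteration barrier.
from multiprocessing import Pool

def optimise_partition(partition_id: int) -> tuple:
    # Stand-in for optimising one partition of the network.
    return partition_id, sum(i * i for i in range(10**6))

if __name__ == "__main__":
    with Pool(processes=16) as pool:
        # imap_unordered yields each result the moment its worker finishes.
        for pid, value in pool.imap_unordered(optimise_partition, range(64)):
            print(pid, value)
```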

Relevance: 100.00%

Publisher:

Abstract:

In recent years computer systems have become increasingly complex, and consequently the challenge of protecting them has become increasingly difficult. Various techniques have been implemented to counteract the misuse of computer systems, in the form of firewalls, antivirus software and intrusion detection systems. The complexity of networks and the dynamic nature of computer systems leave current methods with significant room for improvement. Computer scientists have recently drawn inspiration from mechanisms found in biological systems and, in the context of computer security, have focused on the human immune system (HIS). The human immune system provides an example of a robust, distributed system that delivers a high level of protection against constant attack. By examining the precise mechanisms of the human immune system, it is hoped that this paradigm can improve the performance of real intrusion detection systems. This paper presents an introduction to recent developments in the field of immunology. It discusses the incorporation of a novel immunological paradigm, Danger Theory, and how this concept is inspiring artificial immune systems (AIS). Applications within the context of computer security are outlined, drawing direct reference to the underlying principles of Danger Theory; finally, the current state of intrusion detection systems is discussed and improvements are suggested.

Relevance: 100.00%

Publisher:

Abstract:

The OPIT program is briefly described. OPIT is a basis-set-optimising, self-consistent-field, molecular orbital program for calculating properties of closed-shell ground states of atoms and molecules. A file-handling technique is then put forward which enables core storage to be used efficiently in large FORTRAN scientific application programs. Hashing and list-processing techniques, of the type frequently used in writing system software and computer operating systems, are here applied to the creation of data files (integral label and value lists, etc.). Files consist of a chained series of blocks which may exist in core, on backing store, or both. Efficient use of core store is achieved, and the processes of file deletion, file re-writing and garbage collection of unused blocks can be easily arranged. The scheme is exemplified with reference to the OPIT program. A subsequent paper will describe a job-scheduling scheme for large programs of this sort.
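
A sketch of the chained-block idea in modern terms: fixed-size blocks chained per file, with a hash index from file labels to their chains. All names are illustrative, and the original scheme targeted FORTRAN programs rather than Python:

```python
# Files as chains of fixed-size blocks, located through a hash index.
BLOCK_SIZE = 4   # items per block; tiny so the chaining is visible

class ChainedFile:
    def __init__(self):
        self.blocks = []   # each block: {"items": [...], "next": index or None}
        self.heads = {}    # hash index: label -> first block of its chain
        self.tails = {}    # label -> last block, for O(1) appends

    def _new_block(self) -> int:
        self.blocks.append({"items": [], "next": None})
        return len(self.blocks) - 1

    def append(self, label, value):
        if label not in self.heads:
            b = self._new_block()
            self.heads[label] = self.tails[label] = b
        tail = self.tails[label]
        if len(self.blocks[tail]["items"]) == BLOCK_SIZE:
            b = self._new_block()               # chain a fresh block
            self.blocks[tail]["next"] = b
            self.tails[label] = tail = b
        self.blocks[tail]["items"].append(value)

    def read(self, label):
        b = self.heads.get(label)
        while b is not None:                     # walk the chain
            yield from self.blocks[b]["items"]
            b = self.blocks[b]["next"]
```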


Relevance: 100.00%

Publisher:

Abstract:

With rising environmental concern, the reduction of critical aircraft emissions, including carbon dioxide (CO2) and nitrogen oxides (NOx), is one of the most important aeronautical problems. There are many possible attacks on this problem, such as designing new wing/aircraft shapes or new, more efficient engines. This paper instead provides a set of acceptable flight plans as a first step, short of replacing current aircraft. The paper investigates a green aircraft design optimisation in terms of aircraft range, mission fuel weight (CO2) and NOx, using advanced Evolutionary Algorithms coupled to flight optimisation system software. Two multi-objective design optimisations are conducted to find the best set of flight plans for current aircraft, considering discretised altitudes and Mach numbers, without redesigning the aircraft shape or engine type. The objectives of the first optimisation are to maximise aircraft range while minimising NOx with constant mission fuel weight. The second optimisation minimises mission fuel weight and NOx with fixed aircraft range. Numerical results show that the method is able to capture a set of useful trade-offs that reduce NOx and CO2 (minimum mission fuel weight).
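
Such optimisations return trade-off sets; as a small illustration of how a candidate set is reduced to its non-dominated flight plans, here is a Pareto-front filter for two minimised objectives. The candidate tuples are made-up placeholders, not results from the paper:

```python
# Keep only flight plans not dominated in both objectives (both minimised),
# e.g. (mission fuel weight, NOx).
def pareto_front(candidates):
    front = []
    for (f1, n1) in candidates:
        dominated = any(
            f2 <= f1 and n2 <= n1 and (f2 < f1 or n2 < n1)
            for (f2, n2) in candidates
        )
        if not dominated:
            front.append((f1, n1))
    return front

plans = [(9.5, 120.0), (10.2, 95.0), (11.0, 130.0), (9.8, 110.0)]  # (fuel t, NOx kg)
print(pareto_front(plans))   # (11.0, 130.0) is dominated and drops out
```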

Relevance: 100.00%

Publisher:

Abstract:

This paper introduces a model to facilitate delegation, including ad-hoc delegation, in cross-security-domain activities. Specifically, it proposes a novel delegation constraint management model to manage and track delegation constraints across security domains. An algorithm to trace the authority of delegation constraints is introduced, as well as an algorithm to form a delegation constraint set and detect/prevent potential conflicts. The algorithms and the management model are built upon a set of formal definitions of delegation constraints.
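
As an illustration of the authority-tracing idea (not the paper's algorithm), a constraint that records its issuer and the constraint it derives from can be walked back to its root authority; the field names are assumptions:

```python
# Trace a delegation constraint chain back to the principal that issued
# the original (root) constraint.
class Constraint:
    def __init__(self, issuer, parent=None):
        self.issuer = issuer    # domain/principal that issued this constraint
        self.parent = parent    # constraint it was re-delegated from, if any

def root_authority(c: Constraint):
    while c.parent is not None:   # walk up the re-delegation chain
        c = c.parent
    return c.issuer

root = Constraint("domainA.admin")
hop1 = Constraint("domainB.service", parent=root)
hop2 = Constraint("domainC.user", parent=hop1)
print(root_authority(hop2))       # -> domainA.admin
```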

Relevance: 100.00%

Publisher:

Abstract:

It is not uncommon for enterprises today to face the demand to integrate many different, possibly heterogeneous systems, generally designed and developed independently, to allow seamless access. In effect, the integration of these systems results in one large whole that must, at the same time, maintain local autonomy and continue working as a set of independent entities. This problem has given rise to a distributed architecture called federated systems. The most challenging issue in federated systems is how members can cooperate efficiently while preserving their autonomy, especially their security autonomy. This thesis addresses that issue. The thesis reviews the evolution of the concept of federated systems and discusses their organisational characteristics as well as the security issues remaining in existing approaches. It examines how delegation can be used as a means to achieve better security, especially authorisation, while maintaining autonomy for the participating members of the federation. A delegation taxonomy is proposed as one of the main contributions. The major contribution of this thesis is to study and design a mechanism to support delegation within and between multiple security domains with constraint management capability. A novel delegation framework is proposed, comprising two modules: a Delegation Constraint Management module and a Policy Management module. The first module is designed to effectively create, track and manage delegation constraints, especially for delegation processes that require re-delegation (indirect delegation). It employs two algorithms: one to trace the root authority of a delegation constraint chain, and one to prevent potential conflicts when creating a delegation constraint chain; the module is designed for conflict prevention, not conflict resolution. The second module supports the first via a policy comparison capability; its major function is to give the delegation framework the ability to compare policies and constraints (written in the format of a policy). This module is an extension of Lin et al.'s work on policy filtering and policy analysis. Throughout the thesis, case studies are used to illustrate the concepts discussed. The two modules are designed to capture one of the most important aspects of the delegation process: the relationships between delegation transactions and the constraints involved, which are not well addressed by existing approaches. This contribution is significant because these relationships provide the information needed to track and enforce the delegation constraints involved, and therefore play a vital role in maintaining and enforcing security for transactions across multiple security domains.
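
As a sketch of the conflict-prevention step, one simple invariant is that a re-delegated constraint may only attenuate, never widen, what its parent allows; representing a constraint as a set of permitted actions is an assumption made purely for illustration, not the thesis's formal definition:

```python
# Reject a re-delegation that would grant rights its parent constraint lacks.
def can_extend(parent_permissions: set, new_permissions: set) -> bool:
    return new_permissions <= parent_permissions   # subset: attenuation only

chain = [{"read", "write"}]          # root constraint of the chain
candidate = {"read"}                 # proposed re-delegation
if can_extend(chain[-1], candidate):
    chain.append(candidate)          # safe: no conflict introduced
else:
    raise ValueError("conflicting delegation constraint")
```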

Relevance: 100.00%

Publisher:

Abstract:

At NDSS 2012, Yan et al. analyzed the security of several challenge-response user authentication protocols against passive observers, and proposed a generic counting-based statistical attack that recovers the secret of some counting-based protocols given a number of observed authentication sessions. Roughly speaking, the attack exploits the fact that secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects once the responses are taken into account. Although Yan et al. noted that a protocol susceptible to this attack should minimize this difference, they gave few details on how this can be achieved. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative, which enables us to describe a method for theoretically estimating a lower bound on the number of sessions for which a protocol can safely be used against the attack. Our results include 1) two proposed fixes that make counting protocols practically safe against the attack at the cost of usability; 2) the observation that the attack applies to non-counting-based protocols too, as long as challenge generation is contrived; and 3) two main design principles for user authentication protocols, which can be considered extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting-based protocols to determine their susceptibility to the attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
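
As a generic illustration of the counting idea (not Yan et al.'s exact algorithm, nor the Foxtail response function), weighting each object's challenge appearances by the session response tends to surface pass objects; the toy data below is invented:

```python
# Rank candidate objects by response-weighted appearance counts across
# observed sessions; secrets correlate with responses, so they rise to the top.
from collections import Counter

def rank_candidates(sessions):
    """sessions: list of (challenge_objects, response) pairs."""
    score = Counter()
    for objects, response in sessions:
        for obj in objects:
            score[obj] += response    # weight appearance by the response
    return [obj for obj, _ in score.most_common()]

# Toy data: object "k" is the secret, so it inflates responses when present.
observed = [({"a", "k", "c"}, 1), ({"b", "d", "e"}, 0), ({"k", "e", "f"}, 1)]
print(rank_candidates(observed)[:3])  # secret "k" should rank near the top
```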