862 results for computing systems design
Abstract:
As supervisory systems evolve, extracting significant information from process data becomes increasingly important, since it simplifies the supervision system's particular tasks. Signal-treatment tools capable of deriving elaborate information from process data are therefore valuable. This paper presents a tool that obtains qualitative data about the trends and oscillation of signals, together with an application: the tool, implemented in a computer-aided control systems design (CACSD) environment, is used to supply qualitative information to an expert system for fault detection in a laboratory plant.
Abstract:
This paper describes a new reliable method, based on modal interval analysis (MIA) and set inversion (SI) techniques, for the characterization of solution sets defined by quantified constraint satisfaction problems (QCSP) over continuous domains. The presented methodology, called quantified set inversion (QSI), can be used over a wide range of engineering problems involving uncertain nonlinear models. Finally, an application to parameter identification is presented.
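The abstract does not give the QSI algorithm itself, but the classical set-inversion idea it builds on can be sketched. The following is a minimal SIVIA-style sketch (plain interval bisection, not the paper's modal-interval QSI): it characterizes the set {(x, y) : f(x, y) in [0, 1]} for an assumed example constraint f(x, y) = x^2 + y^2 by recursively classifying boxes as inside, outside, or boundary.

```python
def f_range(box):
    """Interval extension of f(x, y) = x^2 + y^2 over a box ((xl, xu), (yl, yu))."""
    (xl, xu), (yl, yu) = box
    sq = lambda l, u: (0.0 if l <= 0 <= u else min(l * l, u * u), max(l * l, u * u))
    fx, fy = sq(xl, xu), sq(yl, yu)
    return fx[0] + fy[0], fx[1] + fy[1]

def sivia(box, target=(0.0, 1.0), eps=0.25):
    """Classify boxes against the target interval by recursive bisection."""
    lo, hi = f_range(box)
    tl, tu = target
    if lo >= tl and hi <= tu:
        return [("inside", box)]      # box entirely inside the solution set
    if hi < tl or lo > tu:
        return [("outside", box)]     # box entirely outside
    (xl, xu), (yl, yu) = box
    if max(xu - xl, yu - yl) < eps:
        return [("boundary", box)]    # undecided box below resolution eps
    # bisect the widest dimension and recurse on both halves
    if xu - xl >= yu - yl:
        m = (xl + xu) / 2
        halves = [((xl, m), (yl, yu)), ((m, xu), (yl, yu))]
    else:
        m = (yl + yu) / 2
        halves = [((xl, xu), (yl, m)), ((xl, xu), (m, yu))]
    return sivia(halves[0], target, eps) + sivia(halves[1], target, eps)

boxes = sivia(((-2.0, 2.0), (-2.0, 2.0)))
```

The result is a paving of the search box into verified-inside, verified-outside, and boundary boxes; QSI extends this style of reasoning to quantified constraints using modal intervals.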
Abstract:
This is a presentation that introduces the envisioning (set-up) stage of a project or case study. It sets envisioning in a framework of software engineering and agile methodologies. The presentation also covers techniques for engaging with stakeholders in the domain of the project: building a co-designing team; information gathering; and the ethics of engagement. There is a short section on sprint planning and managing the project backlog using a burndown chart.
Abstract:
Following a theoretical framework built from several decades of work by various authors on management control systems, this study examines and tests the relationship between the development of such systems and resources and capabilities. To that end, a case study was carried out at Teleperformance Colombia (TC), a company dedicated to business process outsourcing services. Two variables were established to evaluate the development of the management control system: design and use. For each, indicators and questions were defined to enable observation and subsequent analysis. Likewise, the resources and capabilities most important to the business were selected: innovation, organizational learning and human capital, and the existence of a relationship between these and the management control system implemented at TC was validated. The information obtained was analyzed and tested using statistical tests widely applied to this type of study in the social sciences. Finally, six possible relationships were analyzed, of which only the positive relationship between the use of the management control system and the human-capital resource and capability was confirmed. The remaining relationships refuted the theoretical propositions that posited some influence of management control systems on the innovation and organizational-learning resources and capabilities.
Abstract:
The activated sludge process, the main biological technology usually applied in wastewater treatment plants (WWTP), depends directly on living organisms (microorganisms), and therefore on the unforeseen changes they produce. Good plant operation is possible if the supervisory control system is able to react to changes and deviations in the system and can take the actions necessary to restore the system's performance. These decisions are often based both on physical, chemical and microbiological principles (suitable to be modelled by conventional control algorithms) and on some knowledge (suitable to be modelled by knowledge-based systems). But one of the key problems in knowledge-based control systems design is the development of an architecture able to manage the different elements of the process efficiently (integrated architecture), to learn from previous cases (specific experimental knowledge) and to acquire the domain knowledge (general expert knowledge). These problems increase when the process belongs to an ill-structured domain and is composed of several complex operational units. Therefore, an integrated and distributed AI architecture seems to be a good choice. This paper proposes an integrated and distributed supervisory multi-level architecture for the supervision of WWTP that overcomes some of the main shortcomings of classical control techniques and of knowledge-based systems applied to real-world systems.
Abstract:
Purpose – The purpose of this paper is to investigate the concepts of intelligent buildings (IBs), and the opportunities offered by the application of computer-aided facilities management (CAFM) systems. Design/methodology/approach – In this paper definitions of IBs are investigated, particularly definitions embracing open standards for effective operational change, using a questionnaire survey. The survey further investigated the extension of CAFM to IB concepts and the opportunities that such integrated systems will provide to facilities management (FM) professionals. Findings – The results showed variation in the understanding of the concept of IBs and the application of CAFM. The survey showed that 46 per cent of respondents use a CAFM system, with a majority agreeing on the potential of CAFM in the delivery of effective facilities. Research limitations/implications – The questionnaire survey results are limited to the views of the respondents within the context of FM in the UK. Practical implications – Adopting one of the many definitions of an IB does not necessarily lead to technologies or equipment that conform to an open standard. This open standard, and the documentation of systems produced by vendors, is the key to integrating CAFM with other building management systems (BMS) and further harnessing the application of CAFM for IBs. Originality/value – The paper gives experience-based suggestions for both demand and supply sides of service procurement to gain the feasible benefits and avoid the obstacles that currently hinder adoption, and provides insight into current and future tools for the mobile aspects of FM. The findings are relevant for service providers and operators as well.
Abstract:
Purpose – This paper proposes assessing the context within which integrated logistic support (ILS) can be implemented for whole life performance of building services systems. Design/methodology/approach – The use of ILS within a through-life business model (TLBM) is a better framework to achieve a well-designed, constructed and managed product. However, for ILS to be implemented in a TLBM for building services systems, the practices, tools and techniques need certain contextual prerequisites tailored to suit the construction industry. These contextual prerequisites are discussed. Findings – The case studies conducted reinforced the contextual importance of prime contracting, partnering and team collaboration for the application of ILS techniques. The lack of data was a major hindrance to the full realisation of ILS techniques within the case studies. Originality/value – The paper concludes with the recognition of the value of these contextual prerequisites for the use of ILS techniques within the building industry.
Abstract:
Traditionally, applications and tools supporting collaborative computing have been designed only with personal computers in mind and support a limited range of computing and network platforms. These applications are therefore not well equipped to deal with network heterogeneity and, in particular, do not cope well with dynamic network topologies. Progress in this area must be made if we are to fulfil the needs of users and support the diversity, mobility, and portability that are likely to characterise group work in the future. This paper describes a groupware platform called Coco that is designed to support collaboration in a heterogeneous network environment. The work demonstrates that progress in the development of generic supporting groupware is achievable, even in the context of heterogeneous and dynamic networks. It also demonstrates the progress made in the development of an underlying communications infrastructure, building on peer-to-peer concepts and topologies to improve scalability and robustness.
Abstract:
Space applications are challenged by the reliability of parallel computing systems (FPGAs) employed in spacecraft due to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems that are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieve autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, and landscapes representing the computing space and problem are generated using MATLAB.
Abstract:
Studies in the literature have proposed techniques to facilitate pointing in graphical user interfaces through the use of proxy targets. Proxy targets effectively bring the target to the cursor, thereby reducing the distance that the cursor must travel. This paper describes a study which aims to provide an initial understanding of how older adults respond to proxy targets, and compares older with younger users. We found that users in both age groups adjusted to the proxy targets without difficulty, and there was no indication in the cursor trajectories that users were confused about which target, i.e. the original versus the proxy, was to be selected. In terms of times, preliminary results show that for younger users, proxies did not provide any benefits over direct selection, while for older users, times were increased with proxy targets. A full analysis of the movement times, error rates, throughput and subjective feedback is currently underway.
Abstract:
Processor virtualization for process migration in distributed parallel computing systems has formed a significant component of research on load balancing. In contrast, the potential of processor virtualization for fault tolerance has been addressed minimally. The work reported in this paper is motivated towards extending concepts of processor virtualization towards 'intelligent cores' as a means to achieve fault tolerance in distributed parallel computing systems. Intelligent cores are an abstraction of the hardware processing cores, with the incorporation of cognitive capabilities, on which parallel tasks can be executed and migrated. When a processing core executing a task is predicted to fail, the task being executed is proactively transferred onto another core. A parallel reduction algorithm incorporating concepts of intelligent cores is implemented on a computer cluster using Adaptive MPI and Charm++. Preliminary results confirm the feasibility of the approach.
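The proactive-migration idea can be illustrated with a minimal sketch (purely illustrative, not the paper's Charm++/AMPI implementation; the `Core` class and failure-prediction step are assumptions for the example): a task is moved off a core as soon as that core is predicted to fail, so execution completes without interruption.

```python
class Core:
    def __init__(self, name, fail_at=None):
        self.name = name
        self.fail_at = fail_at        # step at which failure is predicted, if any

    def predicted_to_fail(self, step):
        # a core is flagged one step before its predicted failure
        return self.fail_at is not None and step + 1 >= self.fail_at

def run_task(total_steps, cores):
    """Execute a task step by step, proactively migrating off failing cores."""
    current = cores[0]
    history = []
    for step in range(total_steps):
        if current.predicted_to_fail(step):
            # proactive migration: move the task to a healthy core
            healthy = [c for c in cores if not c.predicted_to_fail(step)]
            current = healthy[0]
        history.append(current.name)
    return history

trace = run_task(6, [Core("c0", fail_at=3), Core("c1")])
# the task starts on c0 and migrates to c1 before c0's predicted failure
```

In the real system the prediction would come from hardware health monitoring, and migration would move the virtualized process state rather than a label, but the control flow is the same.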
Abstract:
Recent research in multi-agent systems incorporates fault-tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The feasibility of the approach is validated by implementing a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
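The decomposition-onto-agents idea can be sketched as follows (a hedged sketch, not the paper's MPI implementation; the function name and round-robin mapping are assumptions for illustration): a reduction is split into sub-tasks, each carried by an agent, and agents relocate away from a processor predicted to fail while the partial results still combine correctly.

```python
def intelligent_agent_reduce(data, n_procs, failing_proc=None):
    """Agent-carried parallel reduction surviving one predicted processor failure."""
    # decompose the task: one agent per chunk, mapped round-robin onto processors
    agents = [{"chunk": data[i::n_procs], "proc": i % n_procs}
              for i in range(n_procs)]
    # predicted failure: agents on the failing processor migrate to a neighbour
    for a in agents:
        if a["proc"] == failing_proc:
            a["proc"] = (a["proc"] + 1) % n_procs
    # each agent reduces its own sub-task, then partial results are combined
    partials = [sum(a["chunk"]) for a in agents]
    return sum(partials)

total = intelligent_agent_reduce(list(range(10)), n_procs=4, failing_proc=2)
# total == sum(range(10)) == 45 despite the simulated failure
```

Because the data travels with the agents rather than being pinned to a processor, the reduction result is unchanged by the migration.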