935 results for Enterprise application integration (Computer systems)
Abstract:
Recently, honeycomb meshes have been considered as alternative candidates for interconnection networks in parallel and distributed computer systems. This paper presents a solution to one of the open problems concerning honeycomb meshes, the so-called three disjoint path problem. The problem requires minimizing the length of the longest of any three disjoint paths between 3-degree nodes. This solution provides information for re-routing traffic across the network in the presence of faults.
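This is not the paper's construction, only a small illustration of the problem instance using networkx (an assumed tool, not mentioned in the paper): build a honeycomb mesh, pick two 3-degree nodes, and extract node-disjoint paths between them; the quantity the paper minimizes is the length of the longest of the three paths.

import networkx as nx
from networkx.algorithms.connectivity import node_disjoint_paths

G = nx.hexagonal_lattice_graph(4, 4)            # a small honeycomb mesh
deg3 = [v for v in G if G.degree(v) == 3]       # the 3-degree nodes
s, t = deg3[0], deg3[-1]                        # an arbitrary pair of 3-degree nodes
paths = list(node_disjoint_paths(G, s, t))      # pairwise node-disjoint paths (up to 3 here)
longest = max(len(p) - 1 for p in paths)        # length of the longest path, in edges
print(len(paths), "disjoint paths; the longest has", longest, "edges")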
Abstract:
Energy-efficient computing remains a critical challenge across the wide range of future data-processing engines, from ultra-low-power embedded systems to servers, mainframes, and supercomputers. In addition, the advent of cloud and mobile computing, as well as the explosion of IoT technologies, has created new research challenges in the already complex, multidimensional space of modern and future computer systems. These new research challenges led to the establishment of the IEEE Rebooting Computing Initiative, which specifically addresses novel low-power solutions and technologies as one of its main areas of concern. With this in mind, we thought it timely to survey the state of the art of energy-efficient computing.
Abstract:
Completed under a cotutelle (joint supervision) agreement with the École normale supérieure de Cachan – Université Paris-Saclay
Abstract:
The transfer of the right information at the right time, together with high-quality work at every stage of a company's order-to-delivery chain, are key factors in fulfilling the value proposition and quality promised to the customer. The goal of this master's thesis is to develop tools for an SME for better information management and high-quality work in its ERP system. The research method used was action research, in which the author took part in the target company's daily work for four months. Data were also collected through semi-structured interviews and a survey. The research approach is qualitative. The thesis consists of a theoretical part and an applied part, after which the results are summarized in the conclusions and summary. ERP systems collect and store the information that employees and people working at the company's interfaces enter into them. It is therefore extremely important that the company has documented, uniform operating models for the processes it uses when storing information in its systems. This thesis examines the SME's current operating models for storing information in the ERP system, after which uniform instructions are developed for the sales order agreement entered into the ERP system. The theoretical part presents quality from different perspectives, what quality management systems are, and how they are developed. It also explains the principles of make-to-order production and the significance of an ERP system for the business. The theoretical part lays the groundwork for the applied part, in which, after a problem analysis, the company's own quality management system is developed, together with new working models for information exchange and storage. A further result is more efficient use of the ERP system, implemented by the software vendor: unnecessary item nomenclatures were removed from the software and its configuration was streamlined. The thesis produced work instructions for carrying out the core processes, as well as the company's own quality management system to support its core and support processes and its information management.
Abstract:
Abstract not available
Abstract:
To meet electricity demand, electric utilities develop growth strategies for generation, transmission, and distribution systems. For a long time, those strategies have been developed by applying a least-cost methodology, in which the cheapest stand-alone resources are simply added instead of analyzing complete portfolios. As a consequence, the least-cost methodology is biased in favor of fossil-fuel-based technologies, completely ignoring the benefits of adding non-fossil-fuel technologies, especially renewable energies, to generation portfolios. For this reason, this thesis introduces modern portfolio theory (MPT) to gain a more profound insight into a generation portfolio's performance using generation cost and risk metrics. We discuss all the necessary assumptions and modifications to this finance technique for its application within power systems planning, and we present a real case of analysis. Finally, the results of this thesis are summarized, pointing out the main benefits and the scope of this new tool in the context of electricity generation planning.
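As a rough sketch of the standard MPT metrics adapted to generation planning (assumed notation; the thesis's exact formulation may differ), with w_i the share of technology i in the portfolio, c_i its expected generating cost, \sigma_i its cost risk (standard deviation), and \rho_{ij} the correlation between technologies i and j:

    E[C_p] = \sum_i w_i c_i
    \sigma_p = \sqrt{\sum_i \sum_j w_i w_j \sigma_i \sigma_j \rho_{ij}}, \quad \text{with } \sum_i w_i = 1,\; w_i \ge 0

Efficient portfolios minimize \sigma_p for a given expected cost E[C_p] (or minimize cost for a given risk), rather than adding the cheapest stand-alone resources one by one.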
Abstract:
Stand-alone and networked virtual-reality-based surgical simulators have been proposed as a means to train surgical skills with or without a supervisor near the student or trainee. However, the teaching of surgical skills in medical schools and hospitals is changing, requiring the development of new tools that focus on: (i) the importance of the mentor's role, (ii) teamwork skills, and (iii) remote training support. For these reasons, a surgical simulator should allow not only training involving a student and an instructor who are located remotely, but also the collaborative training of users adopting different medical roles during the training session. Collaborative Networked Virtual Surgical Simulators (CNVSS) allow collaborative training of surgical procedures in which remotely located users with different surgical roles can take part in the training session. To provide successful training with good collaborative performance, a CNVSS should handle heterogeneity factors such as users' machine capabilities and network conditions, among others. Several systems for collaborative training of surgical procedures have been developed as research projects; to the best of our knowledge, none has focused on handling heterogeneity in CNVSS. Handling heterogeneity in this type of collaborative session is important because not all remotely located users have homogeneous internet connections, nor the same interaction devices and displays, nor the same computational resources, among other factors. Additionally, if heterogeneity is not handled properly, it will have an adverse impact on the performance of each user during the collaborative session. This document proposes the development of a context-aware architecture for collaborative networked virtual surgical simulators in order to handle the heterogeneity involved in the collaboration session. To achieve this, the thesis makes the following main contributions: (i) which infrastructure heterogeneity factors affect the collaboration of two users performing a virtual surgical procedure, and how, was determined and analyzed through a set of experiments involving collaborating users; (ii) a context-aware software architecture for a CNVSS was proposed and implemented, which handles the heterogeneity factors affecting collaboration by applying various adaptation mechanisms; and (iii) a mechanism for handling the heterogeneity factors involved in a CNVSS is described, implemented, and validated in a set of testing scenarios.
Abstract:
Advances in FPGA technology and higher processing-capability requirements have driven the emergence of All Programmable Systems-on-Chip, which incorporate a hardwired processing system and programmable logic, enabling the development of specialized computer systems for a wide range of practical applications, including data and signal processing, high-performance computing, and embedded systems, among many others. To provide an infrastructure capable of exploiting the benefits of such a reconfigurable system, the main goal of this thesis is to implement an infrastructure composed of hardware, software, and network resources that incorporates the necessary services for the operation, management, and interfacing of peripherals, which compose the basic building blocks for the execution of applications. The project will be developed using a chip from the Zynq-7000 All Programmable Systems-on-Chip family.
Abstract:
The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms “merge” similar pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. The results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors depending on the guest system workloads and execution time.
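A minimal, illustrative sketch of the page-merging idea described above (hypothetical data structures, not VMware's or KVM's actual implementation): identical pages are detected by content hash and backed by a single shared frame that is marked copy-on-write.

import hashlib

class Frame:
    def __init__(self, data: bytes):
        self.data = data
        self.refcount = 1
        self.copy_on_write = False

class HostMemory:
    def __init__(self):
        self.by_hash = {}   # content hash -> shared Frame

    def store_page(self, data: bytes) -> Frame:
        """Return a frame for this page, merging it with an identical existing page."""
        key = hashlib.sha1(data).hexdigest()
        frame = self.by_hash.get(key)
        if frame is not None and frame.data == data:   # re-check content to rule out hash collisions
            frame.refcount += 1
            frame.copy_on_write = True                 # future writes must copy first
            return frame
        frame = Frame(data)
        self.by_hash[key] = frame
        return frame

    def write_page(self, frame: Frame, data: bytes) -> Frame:
        """Copy-on-write: writing to a shared frame gives the writer a private copy."""
        if frame.copy_on_write and frame.refcount > 1:
            frame.refcount -= 1
            return Frame(data)
        frame.data = data
        return frame

# Two VMs storing the same 4 KiB page end up sharing one physical frame:
mem = HostMemory()
page = b"\x00" * 4096
f1 = mem.store_page(page)
f2 = mem.store_page(page)
assert f1 is f2 and f1.refcount == 2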
Abstract:
Human operators are unique in their decision-making capability, judgment, and nondeterminism. Their sense of judgment, unpredictable decision procedures, and susceptibility to environmental elements can cause them to erroneously execute a given task description when operating a computer system. Usually, a computer system is protected against some erroneous human behaviors by having the necessary safeguard mechanisms in place, but some erroneous human operator behaviors can lead to severe or even fatal consequences, especially in safety-critical systems. A generalized methodology that allows modeling and analyzing the interactions between computer systems and human operators, where the operators are allowed to deviate from their prescribed behaviors, will provide a formal understanding of the robustness of a computer system against possible aberrant behaviors by its human operators. We provide several methodologies for assisting in modeling and analyzing human behaviors exhibited while operating computer systems. Every human operator is usually given a specific recommended set of guidelines for operating a system. We first present a process-algebraic methodology for modeling and verifying recommended human task-execution behavior. We then show how one can perform runtime monitoring of a computer system being operated by a human operator to check for violations of temporal safety properties. We consider the concept of a protection envelope, which gives a wider class of behaviors, beyond those strictly prescribed by a human task, that can be tolerated by a system. We then provide a framework for determining whether a computer system can maintain its guarantees if the human operators operate within their protection envelopes. This framework also helps to determine the robustness of the computer system under weakening of the protection envelopes. In this regard, we present a tool called Tutela that assists in implementing the framework. We then examine the ability of a system to remain safe under broad classes of variations of the prescribed human task. We develop a framework for addressing two issues. The first issue is: given a human task specification and a protection envelope, will the protection-envelope properties still hold under standard erroneous executions of that task by the human operators? In other words, how robust is the protection envelope? The second issue is: in the absence of a protection envelope, can we approximate a protection envelope encompassing those standard erroneous human behaviors that can be safely endured by the system? We present an extension of Tutela that implements this framework. The two frameworks mentioned above use Concurrent Game Structures (CGS) as models for both computer systems and their human operators. However, this formalism has some shortcomings for our purposes. We add incomplete-information concepts to CGSs to achieve better modularity for the players. We introduce nondeterminism in both the transition system and the strategies of players, and in the modeling of human operators and computer systems. An incomplete-information Nondeterministic CGS (iNCGS) with nondeterministic action strategies for players is a more precise formalism for modeling human behaviors exhibited while operating a computer system. We show how we can reason about a human behavior satisfying a guarantee by providing a semantics of Alternating-Time Temporal Logic based on iNCGS player strategies.
In a nutshell, this dissertation provides a formal methodology for modeling and analyzing system robustness against both expected and erroneous human operator behaviors.
Abstract:
The purpose of this work is to demonstrate and assess a simple algorithm for automatically estimating the most salient region in an image, which has possible applications in computer vision. The algorithm uses the connection between color dissimilarities in an image and the image's most salient region, and it avoids using image priors. Pixel dissimilarity is an informal function of the distance from a specific pixel's color to other pixels' colors in an image. We examine the relation between pixel color dissimilarity and salient region detection on the MSRA1K image dataset. We propose a simple algorithm for salient region detection through random pixel color dissimilarity. We define dissimilarity by accumulating the distance between each pixel and a sample of n other random pixels, in the CIELAB color space. An important result is that random dissimilarity between each pixel and just one other pixel (n = 1) is enough to create adequate saliency maps when combined with a median filter, with competitive average performance compared with other related methods in the saliency detection research field. The assessment was performed by means of precision-recall curves. This idea is inspired by the human attention mechanism, which is able to choose a few specific regions to focus on, a biological system that the computer vision community aims to emulate. We also review some of the history of this topic of selective attention.
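A minimal sketch of the described algorithm, assuming numpy, scikit-image, and scipy as tools (the abstract does not name them) and a guessed median-filter window size; the original method's exact parameters and post-processing may differ.

import numpy as np
from scipy.ndimage import median_filter
from skimage import color

def random_dissimilarity_saliency(rgb, n=1, filter_size=9, seed=0):
    """Saliency as the accumulated CIELAB distance from each pixel to n random pixels."""
    rng = np.random.default_rng(seed)
    lab = color.rgb2lab(rgb)                     # H x W x 3 CIELAB image
    h, w, _ = lab.shape
    flat = lab.reshape(-1, 3)
    sal = np.zeros(h * w)
    for _ in range(n):                           # n random "partner" pixels per pixel
        idx = rng.integers(0, h * w, size=h * w)
        sal += np.linalg.norm(flat - flat[idx], axis=1)
    sal = median_filter(sal.reshape(h, w), size=filter_size)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)   # normalize to [0, 1]

# usage with a hypothetical file name:
# saliency_map = random_dissimilarity_saliency(skimage.io.imread("some_image.jpg"), n=1)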
Abstract:
This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and it took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and three of its modules: AES, RSA, and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One was built over wolfSSL, an open-source library for embedded systems. The other was built over OpenSSL, which is open source and a de facto standard; unlike wolfSSL, OpenSSL does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running the Windows 10 operating system. This document presents test results showing GEmSysC to be simpler than other libraries in some aspects. The results show that both implementations incur little overhead in computation time compared to the underlying cryptographic libraries themselves. The overhead was measured for each cryptographic algorithm and lies between roughly 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
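As a loose illustration of the "generic core plus attachable module" idea (hypothetical names, with Python's hashlib as a stand-in backend; this is not GEmSysC's actual API, which is specified for embedded C code), application code talks only to the generic interface, so the underlying library can be swapped without changing callers.

import hashlib

class HashBackend:
    """Generic core: application code depends only on this interface."""
    def sha256_digest(self, data: bytes) -> bytes:
        raise NotImplementedError

class HashlibBackend(HashBackend):
    """Attachable module: adapts one concrete library (here, hashlib)."""
    def sha256_digest(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

def application_code(backend: HashBackend) -> None:
    # The caller never names the underlying library, so a wolfSSL- or
    # OpenSSL-backed module could replace this one without changes here.
    print(backend.sha256_digest(b"hello").hex())

application_code(HashlibBackend())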
Abstract:
Completed under a cotutelle (joint supervision) agreement with the École normale supérieure de Cachan – Université Paris-Saclay
Abstract:
The markets for mobile-to-mobile voice services provided by Advanced Mobile System operators in Latin America have been subject to regulatory processes motivated by one operator's market dominance, seeking to achieve optimal conditions of competition. Specifically in Ecuador, the Superintendencia de Telecomunicaciones (the technical telecommunications control body) developed a model to identify regulatory actions that could give the market sustainable competitive effects in the long term. This article deals with the application of control engineering to develop a comprehensive model of the market, using neural networks to predict each operator's tariffs and a fuzzy logic model to predict demand. Additionally, a fuzzy logic inference model is presented to reproduce the operators' marketing strategies and their influence on tariffs. These models would support sound decision-making and were validated with real data.
Abstract:
Semantics, knowledge, and Grids represent three spaces where people interact, understand, learn, and create. Grids represent advanced cyber-infrastructures and their evolution. Big data influences the evolution of semantics, knowledge, and Grids. Exploring semantics, knowledge, and Grids on big data helps accelerate the shift of the scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies.