677 results for abstraction
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
Dissertation submitted to obtain the Doctoral degree in Electrical and Computer Engineering, Specialty: Robotics and Integrated Manufacturing
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite its special-purpose design, it has been increasingly used for general computations, with very good results. Hence, there is a growing effort from the community to seamlessly integrate this kind of device into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of the power of CPU and GPU devices is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Also, the most efficient work balance among devices is highly dependent on the computations to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of the power of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and to efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that the combination of CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments when compared to GPU-only executions.
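The general idea can be illustrated with a minimal sketch (not the Marrow API; all names here, such as hybrid_map and train_ratio, are hypothetical): an offline phase times a few candidate CPU/GPU splits and the best ratio is then reused by a map skeleton that runs both devices concurrently.

```python
# Hypothetical sketch of offline-trained CPU/GPU work balancing (not the Marrow API).
import time
from concurrent.futures import ThreadPoolExecutor

def run_on_cpu(chunk):
    # Stand-in for the CPU kernel (e.g. a vectorized routine).
    return [x * x for x in chunk]

def run_on_gpu(chunk):
    # Stand-in for the GPU kernel (e.g. an OpenCL/CUDA launch).
    return [x * x for x in chunk]

def hybrid_map(data, gpu_ratio):
    """Run one map skeleton with the given GPU/CPU work split."""
    cut = int(len(data) * gpu_ratio)
    gpu_part, cpu_part = data[:cut], data[cut:]
    with ThreadPoolExecutor(max_workers=2) as pool:
        gpu_future = pool.submit(run_on_gpu, gpu_part)
        cpu_future = pool.submit(run_on_cpu, cpu_part)
        return gpu_future.result() + cpu_future.result()

def train_ratio(sample, candidates=(0.2, 0.4, 0.6, 0.8)):
    """Offline training: time each candidate split on a sample and keep the fastest."""
    best, best_t = candidates[0], float("inf")
    for r in candidates:
        t0 = time.perf_counter()
        hybrid_map(sample, r)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = r, elapsed
    return best

ratio = train_ratio(list(range(10_000)))           # offline phase
result = hybrid_map(list(range(100_000)), ratio)   # production runs reuse the trained ratio
```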
Abstract:
After almost a decade of experience applying model-driven approaches to system development, the reported productivity gains from using models and model transformations to develop entire systems are already undeniable benefits of this approach. However, the slowness of higher-level, rule-based model transformation languages hinders the applicability of this approach at industrial scales. Lower-level, efficient languages can be used instead, but productivity and easy maintenance cease to exist. The abstraction penalty problem is not new; it also exists for high-level, object-oriented languages, yet everyone is using them now. Why, then, is not everyone using rule-based model transformation languages? In this thesis, we propose a framework, comprising a language and its respective environment, designed to tackle the most performance-critical operation of high-level model transformation languages: pattern matching. This framework shows that it is possible to mitigate the performance penalty while still using high-level model transformation languages.
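To make the performance-critical step concrete, here is an illustrative sketch (not the thesis framework; the toy Element class and the rule are invented) of the kind of nested pattern search a rule-based transformation engine performs, and which indexing or compilation strategies aim to speed up.

```python
# Illustrative only: naive matching of the pattern "a Class with an Attribute"
# over a toy model, the hot spot that transformation engines optimize.

class Element:
    def __init__(self, kind, name, refs=None):
        self.kind, self.name, self.refs = kind, name, refs or []

def match_class_with_attribute(model):
    """Nested search over model elements; engines replace this with type/reference indexes."""
    for c in model:
        if c.kind == "Class":
            for a in c.refs:
                if a.kind == "Attribute":
                    yield (c, a)

model = [Element("Class", "Person", [Element("Attribute", "name")])]
for cls, attr in match_class_with_attribute(model):
    print(f"table {cls.name}: column {attr.name}")
```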
Abstract:
The benefits of long-term monitoring have drawn considerable attention in healthcare. Since the acquired data provide an important source of information to clinicians and researchers, long-term monitoring studies have become frequent. However, long-term monitoring can result in massive datasets, which makes the analysis of the acquired biosignals a challenge. In this context, visualization, a key point in signal analysis, presents several limitations, and the handling of annotations, on which some machine learning algorithms depend, turns out to be a complex task. To overcome these problems, a novel web-based application for fast and user-friendly biosignal visualization and annotation was developed. This was made possible through the study and implementation of a visualization model. The main process of this model, the visualization process, comprised the characterization of the domain problem, the design of the abstraction, the development of a multilevel visualization, and the study and choice of the visualization techniques that best communicate the information carried by the data. In a second process, the visual encoding variables were the study target. Finally, improved interaction and exploration techniques were implemented, among which annotation handling stands out. Three case studies are presented and discussed, and a usability study supports the reliability of the implemented work.
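One common way to realize a multilevel visualization of long-term signals, sketched below purely as an assumption about the technique (the thesis's exact model may differ), is to precompute min/max envelopes at progressively coarser resolutions so that any zoom level renders a bounded number of points.

```python
# Generic multilevel downsampling sketch for long-term biosignals (assumed technique).

def build_levels(signal, factor=10, min_len=1_000):
    """Precompute coarser levels; each bucket keeps (min, max) so peaks survive downsampling."""
    levels = [[(v, v) for v in signal]]
    while len(levels[-1]) > min_len:
        prev, coarse = levels[-1], []
        for i in range(0, len(prev), factor):
            bucket = prev[i:i + factor]
            coarse.append((min(lo for lo, _ in bucket), max(hi for _, hi in bucket)))
        levels.append(coarse)
    return levels

def view(levels, start_frac, end_frac, max_points=2_000):
    """Return the finest level whose visible window still fits within max_points."""
    for level in levels:
        lo, hi = int(len(level) * start_frac), int(len(level) * end_frac)
        if hi - lo <= max_points:
            return level[lo:hi]
    return levels[-1]
```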
Abstract:
To cope with modernity, the interest in having a fully automated house has been increasing over the years, as technology evolves and as our lives become more stressful and overloaded. An automation system provides a way to simplify some daily tasks, allowing us to have more spare time for activities where we are really needed. There are systems in this domain that try to implement these characteristics, but this kind of technology is at an early stage of evolution and is still far from empowering the user with the desired control over a home. The reason is that the mentioned systems lack some important features, such as adaptability, extensibility and evolution. These systems, developed with a bottom-up approach, are often tailored for programmers and domain experts, most of the time disregarding end users, who are left with unfinished interfaces or products they find difficult to control. Moreover, complex behaviors are avoided, since they are extremely difficult to implement, mostly due to the need to handle priorities, conflicts and device calibration. Besides, these solutions are only available at very high cost, yet they still have the limitation of being difficult for non-technical people to configure once in runtime operation. As a result, it is necessary to create a tool that allows the execution of several automated actions, with an interface that is easy to use but at the same time supports all the main features of this domain. It is also desirable that this tool be independent of the hardware so it can be reused; thus a Model-Driven Development (MDD) approach is the ideal option, as it is a method that follows those principles. Since the automation domain has some very specific concepts, the use of models should be combined with a Domain-Specific Language (DSL). With these two methods, it is possible to create a solution that is adapted to end users, but also to domain experts and programmers, due to the several levels of abstraction that can be added to reduce the complexity of use. The aim of this thesis is to design a Domain-Specific Language (DSL) that follows the Model-Driven Development (MDD) approach, with the purpose of supporting Home Automation (HA) concepts. In this implementation, support for the development of both simple and complex scenarios will be one of the most important concerns. The DSL should also support other significant features in this domain, such as the ability to schedule tasks, something that is limited in the currently existing solutions.
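As a rough illustration of the kind of rule such a language must express, the sketch below (invented syntax and names, not the DSL proposed in the thesis) shows how scheduling, conditions, and priority-based conflict resolution between rules acting on the same device could be modeled.

```python
# Purely illustrative home-automation rule model with schedules and priorities.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    condition: object           # e.g. lambda env: env["temperature"] > 25
    action: object              # returns a (device, state) command
    priority: int = 0           # on conflicting devices, the higher priority wins
    schedule: tuple = (0, 24)   # active hours [start, end)

def run(rules, env, hour):
    active = [r for r in rules
              if r.schedule[0] <= hour < r.schedule[1] and r.condition(env)]
    commands = {}
    for r in sorted(active, key=lambda r: -r.priority):
        device, state = r.action()
        commands.setdefault(device, state)   # keep the highest-priority command per device
    return commands

rules = [
    Rule("cool down", lambda e: e["temperature"] > 25, lambda: ("blinds", "closed")),
    Rule("night mode", lambda e: True, lambda: ("blinds", "open"),
         priority=1, schedule=(22, 24)),
]
print(run(rules, {"temperature": 27}, hour=23))   # {'blinds': 'open'}: night mode wins
```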
Abstract:
In recent years, a set of production paradigms has been proposed to enable manufacturers to meet new market requirements, such as the shift in demand towards highly customized products with shorter product life cycles, rather than traditional mass-produced, standardized consumables. These new paradigms advocate solutions capable of facing these requirements, empowering manufacturing systems with a high capacity to adapt, along with elevated flexibility and robustness, in order to deal with disturbances such as unexpected orders or malfunctions. Evolvable Production Systems propose a solution based on modularity and self-organization at a fine granularity level, supporting pluggability and in this way allowing companies to add and/or remove components during execution without any extra re-programming effort. However, current monitoring software was not designed to fully support these characteristics, being commonly based on centralized SCADA systems that are incapable of re-adapting during execution to the unexpected plugging/unplugging of devices or to changes in the system's topology. Considering these aspects, the work developed for this thesis encompasses a fully distributed agent-based architecture, capable of performing knowledge extraction at different levels of abstraction without sacrificing the capacity to add and/or remove, during runtime, the monitoring entities responsible for data extraction and analysis.
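The plug/unplug idea can be pictured with a simplified sketch (not the thesis architecture; all names are hypothetical): monitoring agents form a hierarchy, lower-level entities can be attached or detached at runtime, and each level aggregates the reports of the level below.

```python
# Simplified illustration of runtime-pluggable, hierarchical monitoring agents.

class MonitoringAgent:
    def __init__(self, name):
        self.name, self.children = name, {}

    def plug(self, agent):
        # Attach a lower-level monitoring entity during execution.
        self.children[agent.name] = agent

    def unplug(self, name):
        # Detach it without reconfiguring the rest of the hierarchy.
        self.children.pop(name, None)

    def report(self):
        # Aggregate the reports of lower levels into this level's view.
        return {"agent": self.name,
                "children": [c.report() for c in self.children.values()]}

cell = MonitoringAgent("cell-1")
cell.plug(MonitoringAgent("gripper"))
cell.plug(MonitoringAgent("conveyor"))
cell.unplug("gripper")          # device removed during execution
print(cell.report())
```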
Abstract:
Mutable state can be useful in certain algorithms, to structure programs, or for efficiency purposes. However, when shared mutable state is used in non-local or non-obvious ways, the interactions that can occur via aliases to that shared memory can be a source of program errors. Undisciplined uses of shared state may unsafely interfere with local reasoning, as other aliases may interleave their changes to the shared state in unexpected ways. We propose a novel technique, rely-guarantee protocols, that structures the interactions between aliases and ensures that only safe interference is possible. We present a linear type system outfitted with our novel sharing mechanism that enables controlled interference over shared mutable resources. Each alias is assigned a separate, local role encoded in a protocol abstraction that constrains how that alias can legally use the shared state. By following the spirit of rely-guarantee reasoning, our rely-guarantee protocols ensure that only safe interference can occur while still allowing many interesting uses of shared state, such as going beyond invariant and monotonic usages. This thesis describes the three core mechanisms that enable our type-based technique to work: 1) we show how a protocol models an alias's perspective on how the shared state evolves and constrains that alias's interactions with the shared state; 2) we show how protocols can be used while enforcing the agreed interference contract; and finally, 3) we show how to check that all local protocols to some shared state can be safely composed to ensure globally safe interference over that shared memory. The interference caused by shared state is rooted in how the uses of different aliases to that state may be interleaved (perhaps even in non-deterministic ways) at run-time. Therefore, our technique is mostly agnostic as to whether this interference results from alias interleaving caused by sequential or concurrent semantics. We show implementations of our technique in both settings and highlight their differences. Because sharing is "first-class" (and not tied to a module), we show a polymorphic procedure that enables abstract compositions of protocols. Thus, protocols can be specialized or extended without requiring specific knowledge of the interference produced by other protocols on that state. We show that protocol composition can ensure safety even when considering abstracted protocols. We show that this core composition mechanism is sound and decidable (without the need for manual intervention), and we provide an implementation of the algorithm.
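As a rough dynamic analogy (the thesis enforces this statically through a linear type system; the classes below are invented for illustration), one can picture each alias carrying its own protocol of allowed steps over the shared cell, with any use outside that protocol rejected.

```python
# Dynamic analogy of per-alias protocols over shared state (the real technique is static).

class SharedCell:
    def __init__(self, value):
        self.value = value

class Alias:
    def __init__(self, cell, protocol):
        self.cell = cell
        self.protocol = list(protocol)   # e.g. [("write", int), ("read", int)]

    def write(self, value):
        step = self.protocol.pop(0) if self.protocol else None
        if step is None or step[0] != "write" or not isinstance(value, step[1]):
            raise TypeError("write not allowed by this alias's protocol")
        self.cell.value = value

    def read(self):
        step = self.protocol.pop(0) if self.protocol else None
        if step is None or step[0] != "read":
            raise TypeError("read not allowed by this alias's protocol")
        return self.cell.value

cell = SharedCell(0)
producer = Alias(cell, [("write", int)])   # this alias may only write an int once
consumer = Alias(cell, [("read", int)])    # this alias may only read once
producer.write(41)
print(consumer.read())   # interference is confined to what both protocols allow
```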
Abstract:
Born in Armenia, the oldest Christian country in the world but nevertheless one of the youngest reinstated republics (1991) after the collapse of the Soviet Union, Arshile Gorky fled to the United States in 1920, where he chose to reinvent himself in the struggle to become an artist. This reinvention meant the creation of a persona with, or behind, which Gorky kept the artistic flame alive inside himself. Gorky became one of the most learned voices lecturing in the United States (New York) on contemporary European modernist artists and movements of the late nineteenth and early twentieth centuries, without ever visiting Europe. Moreover, he was able to survive the traumatic events he underwent during the Armenian Genocide (1915-1919), to adapt to his new country and identity, to live through the years of the Depression and, eventually, to become the protagonist of a major artistic breakthrough. This paper proposes an insight into the life experience and working context of this Armenian-American artist, whose simultaneously rich, traumatic, dislocated and reenacted life and work established one of the most fertile links between his Middle Eastern origins, his dreamed-of Europe and the particular transit of his American artistic creation.
Abstract:
The MAP-i Doctoral Programme in Informatics, of the Universities of Minho, Aveiro and Porto
Abstract:
Doctoral thesis, Doctoral Programme in Electronics and Computer Engineering.
Abstract:
Master's dissertation in Sciences – Continuing Teacher Education (specialization in Physics and Chemistry)
Abstract:
In our work, we have chosen to integrate a formalism for knowledge representation with a formalism for process representation as a way to specify and regulate the overall activity of a multi-cellular agent. The result of this approach is XP,N, another formalism, wherein a distributed system can be modeled as a collection of interrelated sub-nets sharing a common explicit control structure. Each sub-net represents a system of asynchronous concurrent threads modeled by a set of transitions. XP,N combines local state and control with interaction and hierarchy to achieve a high-level abstraction and to model the complex relationships between all the components of a distributed system. Viewed as a tool, XP,N provides a carefully devised conflict-resolution strategy that intentionally mimics the genetic regulatory mechanism used in an organic cell to select the next genes to process.
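A toy illustration of the general flavor, under the assumption of Petri-net-like transitions over a shared marking (this is not the XP,N semantics; the net and weights are invented), shows how a weighted conflict-resolution step can select which enabled transition fires next, loosely mimicking a regulatory selection mechanism.

```python
# Toy net: transitions compete for tokens; conflicts are resolved by weighted selection.
import random

marking = {"p1": 1, "p2": 1, "p3": 0}

transitions = [
    # (name, consumed places, produced places, selection weight)
    ("t1", ["p1"], ["p3"], 3),
    ("t2", ["p1", "p2"], ["p3"], 1),
]

def enabled(t):
    return all(marking[p] > 0 for p in t[1])

def step():
    candidates = [t for t in transitions if enabled(t)]
    if not candidates:
        return False
    # Conflict resolution: weighted choice among enabled transitions.
    t = random.choices(candidates, weights=[c[3] for c in candidates])[0]
    for p in t[1]:
        marking[p] -= 1
    for p in t[2]:
        marking[p] += 1
    return True

while step():
    print(marking)
```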