945 results for Multiple abstraction levels
Abstract:
Study Design. A case report describing chronic recurrent multifocal osteomyelitis (CRMO) with an initial presentation limited to the spine, successfully treated with anti-TNF-alpha therapy after failure of conventional treatment methods. Objective. To describe an unusual manifestation and treatment of a rare disease. Summary of Background Data. CRMO is a rare inflammatory bone disease that should be differentiated from bacterial osteomyelitis. Rarely, it can affect the spine, and in that case the most important differential diagnosis is infectious spondylodiscitis. The disease has an unpredictable course with exacerbations and spontaneous remissions. Although the majority of cases remit spontaneously (or after the use of nonsteroidal anti-inflammatory drugs [NSAIDs]), some progressive and resistant cases have been reported. Methods. We describe a case of CRMO with an unusual clinical presentation, emphasizing its importance as a differential diagnosis of spondylodiscitis, and comment on the available treatment alternatives. Results. A 17-year-old man presented with debilitating dorsal spine pain. Magnetic resonance imaging of the spine revealed bone lesions at multiple vertebral levels. After failure of antibiotic treatment, the diagnosis of CRMO was suggested. An initial good response to NSAIDs was followed by a recurrent course and involvement of peripheral joints despite the use of corticosteroids and other drugs. The introduction of infliximab was followed by complete remission of the disease. Conclusion. Our observation highlights the need for awareness of this differential diagnosis in suspected cases of osteomyelitis not responding to antibiotics. Anti-TNF-alpha agents should be considered in refractory CRMO cases.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation submitted to obtain the Degree of Doctor in Informatics Engineering
Abstract:
Black-blood MR coronary vessel wall imaging may become a powerful tool for the quantitative and noninvasive assessment of atherosclerosis and positive arterial remodeling. Although dual-inversion recovery is currently the gold standard, optimal lumen-to-vessel wall contrast is sometimes difficult to obtain, and the time window available for imaging is limited by the competing requirements of the blood signal nulling time and the period of minimal myocardial motion. Further, atherosclerosis is a spatially heterogeneous disease, and imaging at multiple anatomic levels of the coronary circulation is mandatory. However, this requirement of enhanced volumetric coverage comes at the expense of scanning time. Phase-sensitive inversion recovery has been shown to be very valuable for enhancing tissue-tissue contrast and for making inversion recovery imaging less sensitive to the tissue signal nulling time. This work enables multislice black-blood coronary vessel wall imaging in a single breath hold by extending phase-sensitive inversion recovery to phase-sensitive dual-inversion recovery and combining it with spiral imaging, while relaxing the constraints related to the blood signal nulling time and the period of minimal myocardial motion.
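For orientation (this derivation is standard inversion-recovery physics rather than material from the abstract, and the blood T1 value is an assumed illustrative number), the nulling-time constraint arises because the longitudinal magnetization after an ideal 180° inversion, with a repetition time much longer than T1, recovers as
\[ M_z(\mathrm{TI}) = M_0\left(1 - 2e^{-\mathrm{TI}/T_1}\right), \]
so the blood signal is nulled only at \( \mathrm{TI}_{\text{null}} = T_1 \ln 2 \approx 0.69\,T_1 \). For an assumed blood \( T_1 \) of roughly 1200 ms this gives a nulling time of about 830 ms, which is why the nulling time competes with the period of minimal myocardial motion for the available imaging window.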
Abstract:
Values and value processes are said to be needed in every organization nowadays, as the world is changing and companies have to have something to "keep it together". Organizational values, which are approved and used by the personnel, could be the key. Every organization has values. But what is the real value of values? The greatest and most crucial challenge is the feasibility of the value process. The main point in this thesis is to study how organizational members at different hierarchical levels perceive values and value processes in their organizations. This includes themes such as how values are disseminated, the targets of value processing, factors that affect the process, problems that occur during value implementation, and improvements that could be made when organizational values are implemented. These subjects are studied from the perspective of organizational members (both managers and employees), that is, individuals in the organizations. The aim is to get an insider perspective on value processing from multiple hierarchical levels. In this research I study three different organizations (forest industry, bank and retail cooperative) and their value processes. The data are gathered from the companies by interviewing personnel in the head office and at the local level. The individuals are seen as members of organizations, and the cultural aspect is topical throughout the whole study. Values and cultures are seen as the 'actuality of reality' of organizations, interpreted by organizational members. The three case companies were chosen because they represented different lines of business and they all implemented value processing differently. Since the emphasis in this study is at the local level, the similar size of the local units was also an important factor. Values are in 'fashion', but what does the fashion tell us about real corporate practices? In annual reports, companies emphasize the importance and power of official values. But what is the real 'point' of values? Values are publicly respected and advertised, but still it seems that the words do not meet the deeds. There is a clear conflict between theoretical, official and substantive organizational values: in the value processing from words to real action. This contradiction in value processing is studied through individual perceptions in this study. I study the kinds of perceptions organizational members have when values are processed from the head office to the local level: the official value process is studied from the individual's perspective. Value management has been studied more during the 1990s. The emphasis has usually been on managers: how they consider the values in organizations and what effects these have on management. Recent literature has emphasized values as tools for improving company performance. Value implementation as a process has been studied through 'good' and 'bad' examples, as if one successful value process could be copied to all organizations. Each company is different, with different cultures and personnel, so no all-powerful way of processing values exists. In this study, the organizational members' perceptions at different hierarchical levels are emphasized. Still, managers are also interviewed; this is done since managerial roles in value dissemination are crucial. Organizational values cannot be well disseminated without management; this has been shown in several earlier studies (e.g. Kunda 1992, Martin 1992, Parker 2000).
Recent literature has not sufficiently emphasized the individual's (organizational member's) role in value processing. Organizations consist of different individuals with personal values, at all hierarchical levels. The aim in this study is to let the individual take the floor. Very often the value process is described starting from the value definition and ending at dissemination, and the real results are left without attention. I wish to contribute to this area. Values are published officially in annual reports, etc., as a 'goal', just like profits. Still, the results/implementation of value processing is rarely followed up, at least in official reports. This is a very interesting point: why do companies espouse values if there is no real control or feedback after the processing? In this study, the personnel in three different companies are asked to give an answer. In the empirical findings, there are several results which bring new aspects to the research area of organizational values. The targets of value processing, factors affecting value processing, the management's roles and the problems in value implementation are presented through the individual's perspective. The individual's perceptions in value processing are a recurring theme throughout the whole study. A comparison between the three companies with diverse value processes makes the research complete.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, and therefore other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved with the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
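As an illustration of the error-control-coding approach described above (a minimal sketch, not code from the thesis; the bit width and the test in the main block are chosen only for the example), the following Python fragment implements a plain Hamming(7,4) single-error-correcting code, the kind of code that lets a data-link layer correct one transient bit-flip per protected 4-bit slice of a flit:

# Minimal Hamming(7,4) single-error-correcting code, as used conceptually for
# protecting flits on a NoC link against transient bit-flips (illustrative sketch).

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d0, p3, d1, d2, d3]."""
    d0, d1, d2, d3 = d
    p1 = d0 ^ d1 ^ d3          # covers positions 1, 3, 5, 7
    p2 = d0 ^ d2 ^ d3          # covers positions 2, 3, 6, 7
    p3 = d1 ^ d2 ^ d3          # covers positions 4, 5, 6, 7
    return [p1, p2, d0, p3, d1, d2, d3]

def hamming74_decode(c):
    """c: 7-bit word (possibly with one flipped bit) -> corrected 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]      # parity check over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]      # parity check over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]      # parity check over positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3     # 1-based position of the erroneous bit
    if syndrome:
        c[syndrome - 1] ^= 1            # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

if __name__ == "__main__":
    data = [1, 0, 1, 1]
    word = hamming74_encode(data)
    word[5] ^= 1                        # inject one transient bit-flip on the "link"
    assert hamming74_decode(word) == data
    print("single-bit error corrected:", data)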
Abstract:
The design methods and languages targeted at modern System-on-Chip designs are facing tremendous pressure from ever-increasing complexity, power, and speed requirements. To estimate any of these three metrics, there is a trade-off between accuracy and the level of detail at which the system under design is analyzed. The more detailed the description, the more accurate the simulation will be, but, on the other hand, the more time consuming it will be. Moreover, a designer wants to make decisions as early as possible in the design flow to avoid costly design backtracking. To answer the challenges posed by System-on-Chip designs, this thesis introduces a formal, power-aware framework, its development methods, and methods to constrain and analyze the power consumption of the system under design. The thesis discusses power analysis of synchronous and asynchronous systems, including the communication aspects of these systems. The presented framework is built upon the Timed Action System formalism, which offers an environment for analyzing and constraining the functional and temporal behavior of the system at a high abstraction level. Furthermore, due to the complexity of System-on-Chip designs, the possibility of abstracting away unnecessary implementation details at higher abstraction levels is an essential part of the introduced design framework. The encapsulation and abstraction techniques, together with procedure-based communication, allow a designer to use the presented power-aware framework for modeling these large-scale systems. The introduced techniques also enable one to subdivide the development of communication and computation into separate tasks. This property is taken into account in the power analysis part as well. Furthermore, the presented framework is developed in such a way that it can be used throughout the design project. In other words, a designer is able to model and analyze systems from an abstract specification down to an implementable specification.
Abstract:
The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore’s law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. As an answer to these new pressures, modern-day systems have moved towards on-chip multiprocessing technologies. New architectures have emerged in on-chip multiprocessing in order to utilize the tremendous advances of fabrication technology. Platform-based design is a possible solution for addressing these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. The existing design methodologies pertaining to the platform-based design approach do not provide full automation at every level of the design process, and sometimes the co-design of platform-based systems leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design, in the context of the SegBus platform, a distributed communication architecture. This research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, thus ensuring correct-by-design platforms. The solution is based on a model-based process. Both the platform and the application are modeled using the Unified Modeling Language. This thesis develops a Domain Specific Language to support platform modeling, based on a corresponding UML profile. Object Constraint Language constraints are used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution that is as accurate as possible at high abstraction levels. VHDL code is automatically generated in the form of “snippets” to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
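As a hedged sketch of the model-to-text step mentioned above (the mapping record, segment numbers, priorities, and emitted constants are hypothetical and do not reproduce the real SegBus profile or its generated arbiter code), the idea of deriving VHDL "snippets" from an application mapping can be illustrated as follows:

# Hypothetical model-to-text sketch: from an application mapping (which functional
# unit sits on which bus segment, with what priority), emit a VHDL-like constant
# "snippet" for an arbiter. All names and numbers below are invented for the example.

MAPPING = {  # functional unit -> (segment, static priority), hypothetical values
    "mp3_decoder_left":  (0, 2),
    "mp3_decoder_right": (1, 2),
    "audio_out":         (1, 1),
}

def emit_arbiter_snippet(segment):
    """Return a VHDL-style text snippet listing the priorities for one segment."""
    units = [(unit, prio) for unit, (seg, prio) in MAPPING.items() if seg == segment]
    lines = [f"-- auto-generated arbiter priorities for segment {segment}"]
    for unit, prio in sorted(units, key=lambda pair: pair[1]):
        lines.append(f"constant PRIO_{unit.upper()} : natural := {prio};")
    return "\n".join(lines)

if __name__ == "__main__":
    print(emit_arbiter_snippet(1))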
Abstract:
Modern societies depend more and more on computer systems, and there is thus ever-increasing pressure on development teams to produce high-quality software. Many companies use quality models, suites of programs that analyse and evaluate the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality-modelling practices at a large company and identified three dimensions where additional research is desirable: support for the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across different abstraction levels. Regarding subjectivity, we proposed the use of Bayesian models because they can handle ambiguous data. We applied our models to the problem of detecting design defects. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we considered the scores produced by a quality model as signals that can be analysed using data-mining techniques to identify patterns in quality evolution. We studied how design defects appear in and disappear from software. Software is typically designed as a hierarchy of components, but quality models do not take this organization into account. In the last part of the dissertation, we present a two-level quality model. These models have three parts: a model at the component level, a model that evaluates the importance of each component, and another that evaluates the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods. We found that our two-level models allow better identification of change-prone classes. Finally, we applied our two-level models to evaluating the navigability of web sites from the quality of their pages. Our models were able to distinguish between very high-quality sites and randomly chosen sites. Throughout the dissertation, we present not only theoretical problems and their solutions, but also experiments conducted to demonstrate the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved in the three dimensions presented. In particular, our work on quality composition and importance modelling is the first to target this problem. We believe that our two-level models are an interesting starting point for further research.
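As a purely illustrative sketch of the Bayesian idea summarized above (the symptoms, prior, and likelihoods are invented for the example and are not taken from the dissertation), a naive-Bayes combination of noisy metric-based evidence yields a posterior probability that a class is a "God class" design defect, which shows how such models tolerate ambiguous inputs:

# Illustrative naive-Bayes scoring of a "God class" design defect from class metrics.
# All numbers (prior, likelihoods, thresholds) are hypothetical, for the sketch only.

PRIOR_DEFECT = 0.10   # assumed prior probability that a class is a God class

# (P(symptom | defect), P(symptom | clean)) for three binary symptoms
LIKELIHOODS = {
    "many_methods":   (0.85, 0.20),   # e.g. more than 30 methods
    "low_cohesion":   (0.80, 0.25),   # e.g. LCOM above some threshold
    "many_data_deps": (0.75, 0.30),   # e.g. accesses many foreign attributes
}

def god_class_probability(symptoms):
    """symptoms: dict symptom -> bool. Returns P(defect | symptoms) under naive Bayes."""
    p_defect, p_clean = PRIOR_DEFECT, 1.0 - PRIOR_DEFECT
    for name, present in symptoms.items():
        p_given_defect, p_given_clean = LIKELIHOODS[name]
        if present:
            p_defect *= p_given_defect
            p_clean *= p_given_clean
        else:
            p_defect *= 1.0 - p_given_defect
            p_clean *= 1.0 - p_given_clean
    return p_defect / (p_defect + p_clean)

if __name__ == "__main__":
    suspicious = {"many_methods": True, "low_cohesion": True, "many_data_deps": False}
    print(f"P(God class | metrics) = {god_class_probability(suspicious):.2f}")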
Abstract:
The design of heterogeneous systems requires two important steps, namely modelling and simulation. Usually, simulators are connected and synchronized using a co-simulation bus. Current approaches have many drawbacks: they are not always suited to distributed environments, simulation execution time can be very disappointing, and each simulator has its own simulation kernel. We propose a new approach consisting of the development of a multi-language compiled simulator in which each model can be described using different modelling languages such as SystemC, ESyS.Net or others. Each model generally contains modules and the means of communication between them. The modules describe the functionality of the desired system. They are written using object-oriented programming and can be described using a syntax chosen by the user. We thus propose a separation between the modelling language and the simulation. The models are transformed into a common internal representation that can be viewed as a set of objects. Our environment compiles the internal objects and produces unified code, instead of using several modelling languages that add many communication mechanisms and extra information. Optimizations can include different mechanisms, such as grouping processes into a single sequential process while respecting the semantics of the models. We use two abstraction levels: register transfer level (RTL) and transaction level modeling (TLM). RTL enables modelling at a low abstraction level, with communication between modules carried out through signals and signalling. TLM models transactional communication at a higher abstraction level. Our objective is to support both types of simulation while leaving the choice of modelling language to the user. Likewise, we propose to use a single kernel instead of several and to remove the co-simulation bus in order to speed up simulation.
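To make the RTL/TLM distinction concrete, here is a deliberately language-neutral Python sketch (not the SystemC or ESyS.Net code the framework actually handles): the same producer-to-consumer transfer is modelled once as cycle-by-cycle signal updates and once as a single transaction call:

# Language-neutral sketch of the two abstraction levels mentioned above:
# RTL-style communication via shared signals driven cycle by cycle, versus
# TLM-style communication as one method-call transaction. Illustrative only.

class Signal:
    """RTL-style wire: holds a value that modules sample each clock cycle."""
    def __init__(self):
        self.value = 0

def rtl_simulation(payload_bits):
    data, valid = Signal(), Signal()
    received = []
    for bit in payload_bits:              # one loop iteration == one clock cycle
        data.value, valid.value = bit, 1  # producer drives the wires
        if valid.value:                   # consumer samples the wires
            received.append(data.value)
    return received

class TlmConsumer:
    """TLM-style target: receives a whole payload in one transaction."""
    def __init__(self):
        self.received = []
    def b_transport(self, payload):       # blocking-transport-like call (sketch)
        self.received.extend(payload)

def tlm_simulation(payload_bits):
    consumer = TlmConsumer()
    consumer.b_transport(payload_bits)    # the entire transfer is one function call
    return consumer.received

if __name__ == "__main__":
    bits = [1, 0, 1, 1]
    assert rtl_simulation(bits) == tlm_simulation(bits) == bits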
Abstract:
We present a novel method for adapting interfaces between humans and machines to individual operators. By applying abstractions of evolutionary mechanisms such as selection, recombination and mutation in the EOGUI methodology (Evolutionary Optimization of Graphical User Interfaces), a computer-supported implementation of the method can be provided for graphical user interfaces, in particular for industrial processes. The evolutionary optimization incorporates both the objective, i.e. measurable, quantities such as selection frequencies and selection times, and the operators' subjective impressions captured via online questionnaires. In this way, the visualization of systems is adapted to the needs and preferences of individual operators. In this work, the operator can choose, from four user interfaces at different abstraction levels for the example process MIPS (MIschungsProzess-Simulation, a mixing-process simulation), the objects that best support them in operating the process. The EOGUI algorithm selects these objects, modifies them if necessary, and combines them into a new graphical user interface adapted to the operator. Using the MIPS process, experiments with the EOGUI methodology were carried out to examine the applicability, acceptance and effectiveness of the method for the operation of industrial processes. The investigations largely show that the developed methodology for the evolutionary optimization of human-machine interfaces does adapt industrial process visualizations to the individual operator and improves process operation.
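As a hedged illustration of the evolutionary loop described above (the genome encoding, fitness weights and per-object scores are invented for the example and do not reproduce EOGUI itself), a candidate interface can be encoded as a bit string of displayed objects and evolved with selection, recombination and mutation against a score mixing measured selection times with questionnaire ratings:

import random

# Toy evolutionary optimization of a GUI configuration, in the spirit of the
# selection/recombination/mutation loop described above. The encoding (one bit
# per display object) and the fitness weights are hypothetical.

N_OBJECTS, POP_SIZE, GENERATIONS, MUTATION_RATE = 12, 20, 40, 0.05
random.seed(1)

# Hypothetical per-object scores: lower measured selection time and higher
# subjective questionnaire rating are better.
SELECTION_TIME = [random.uniform(0.5, 3.0) for _ in range(N_OBJECTS)]
USER_RATING = [random.uniform(1.0, 5.0) for _ in range(N_OBJECTS)]

def fitness(genome):
    """Reward well-rated, quickly selectable objects; penalize cluttered screens."""
    shown = [i for i, g in enumerate(genome) if g]
    if not shown:
        return 0.0
    objective = sum(USER_RATING[i] - SELECTION_TIME[i] for i in shown)
    clutter_penalty = 0.3 * max(0, len(shown) - 6)
    return objective - clutter_penalty

def tournament(population):
    return max(random.sample(population, 3), key=fitness)    # selection

def crossover(a, b):
    cut = random.randrange(1, N_OBJECTS)                      # recombination
    return a[:cut] + b[cut:]

def mutate(genome):
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(N_OBJECTS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best interface shows objects:", [i for i, g in enumerate(best) if g])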
Abstract:
The development of effective environmental management plans and policies requires a sound understanding of the driving forces involved in shaping and altering the structure and function of ecosystems. However, driving forces, especially anthropogenic ones, are defined and operate at multiple administrative levels, which do not always match ecological scales. This paper presents an innovative methodology for analysing drivers of change by developing a typology of scale sensitivity of drivers that classifies and describes the way they operate across multiple administrative levels. Scale sensitivity varies considerably among drivers, which can be classified into five broad categories depending on the response of ‘evenness’ and ‘intensity change’ when moving across administrative levels. Indirect drivers tend to show low scale sensitivity, whereas direct drivers show high scale sensitivity, as they operate in a non-linear way across the administrative scale. Thus policies addressing direct drivers of change, in particular, need to take scale into consideration during their formulation. Moreover, such policies must have a strong spatial focus, which can be achieved either by encouraging local–regional policy making or by introducing high flexibility in (inter)national policies to accommodate increased differentiation at lower administrative levels. High-quality data are available for several drivers; however, the availability of consistent data at all levels for non-anthropogenic drivers is a major constraint on mapping and assessing their scale sensitivity. This lack of data may hinder effective policy making for environmental management, since it restricts the ability to fully account for the scale sensitivity of natural drivers in policy design.
Abstract:
As hardware and software technologies advance, the development models of computational systems are also changing. New methodologies for user interface specification are being created around user interface description languages (UIDLs). UIDLs provide a precise description in a more abstract language, independent of how the interface will be implemented. A major problem is that, even with these current methodologies, there is still a large distance between UIDLs and their implementation, that is, between the abstract and the concrete. The tool BRIDGE (Interface Design Generator Environment) was created with the intention of being a bridge between a specification language (the Interactive Message Modeling Language, IMML) and its implementation in Java, linking the abstract (specification) to the concrete (implementation). IMML is a model-based language that allows the designer to work at distinct abstraction levels, each model corresponding to a distinct abstraction level. IMML is an XML language that uses Semiotic Engineering concepts, treating the computational system, with its user interface and interface elements, as a metacommunicative artifact whose elements must transmit a message to the user about which task is to be performed and how to reach this goal. With BRIDGE, we intend to provide extensive support for the design task, user interface prototyping being the most important. BRIDGE makes design easier and more intuitive, starting from an interface specification language.
Abstract:
Aspect-Oriented Software Development (AOSD) is a technique that complements Object-Oriented Software Development (OOSD) by modularizing several concerns that OOSD approaches do not modularize appropriately. However, the current state of the art in AOSD suffers under software evolution, mainly because aspect definitions may stop working correctly when base elements evolve. A promising approach to dealing with that problem is the definition of model-based pointcuts, where pointcuts are defined over a conceptual model. That strategy makes pointcuts less fragile under software evolution than pointcuts defined directly over base elements. Based on that strategy, this work defines a conceptual model at a high abstraction level in which software patterns and architectures can be specified and, through Model-Driven Development (MDD) techniques, instantiated and composed in an architecture description language that allows aspect modeling at the architectural level. Our MDD approach allows concepts at the architectural level to be propagated to other abstraction levels (the design level, for example) through MDA transformation rules. This work also presents a plug-in for the Eclipse platform, called AOADLwithCM, created to support our development process. The AOADLwithCM plug-in was used to describe a case study based on the MobileMedia system. The MobileMedia case study shows, step by step, how the Conceptual Model approach can minimize fragile pointcut problems caused by software evolution. The MobileMedia case study was also used to analyse software evolution according to the software metrics proposed by Khatchadourian, Greenwood and Rashid. In addition, we analyse how evolution in the base model can affect maintenance of the aspectual model with and without the Conceptual Model approach.
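As a language-neutral illustration of the fragile pointcut problem and the model-based remedy discussed above (Python decorators stand in for conceptual-model annotations; none of this is AOADLwithCM notation), a pointcut matched purely by a naming convention silently stops matching when a base element is renamed, whereas one matched against a conceptual-model marking survives the same evolution:

import re

# Name-based "pointcut": matches join points by a naming convention only.
NAME_POINTCUT = re.compile(r"^save_.*")

# Model-based "pointcut": matches join points marked in a conceptual model,
# here emulated with a decorator that tags persistence operations.
def persistence_operation(func):
    func.concept = "PersistenceOperation"
    return func

@persistence_operation
def store_photo(photo):          # evolved: was previously named save_photo
    print("storing", photo)

def matches_by_name(func):
    return bool(NAME_POINTCUT.match(func.__name__))

def matches_by_concept(func):
    return getattr(func, "concept", None) == "PersistenceOperation"

if __name__ == "__main__":
    # After the rename, the name-based pointcut no longer captures the join point,
    # while the conceptual-model pointcut still does.
    print("name-based pointcut matches: ", matches_by_name(store_photo))    # False
    print("model-based pointcut matches:", matches_by_concept(store_photo)) # True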