917 results for Model Driven Architecture (MDA)
Abstract:
The design and implementation of an ERP system involves capturing the information necessary to implement the system's structure and behavior in support of enterprise management. This process should start at the enterprise modeling level and finish at the coding level, descending through different abstraction layers. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools jeopardizes the advantages of source code availability. Moreover, the distributed, decentralized decision-making, source-code-driven development culture of open source communities generally does not rely on methods for modeling the higher abstraction levels necessary for an ERP solution. The aim of this paper is to present a model driven development process for the open source ERP ERP5. The proposed process covers the different abstraction levels involved, taking into account well-established standards and common practices as well as new approaches, by supplying Enterprise, Requirements, Analysis, Design, and Implementation workflows. Copyright 2008 ACM.
Abstract:
We investigate the formation of molecules under the action of an external field acting during an atomic collision. To describe this process, the collision of atomic pairs, we use the driven Morse oscillator model. The study was developed from the standpoint of classical mechanics, analyzing the sensitivity of the system with respect to initial conditions, verifying the chaotic dynamics associated with the laser-assisted formation of molecules, and examining the system dynamics and the likelihood of photoassociation as a function of the external field parameters.
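A minimal sketch of the underlying model described above: a classical Morse oscillator driven by a sinusoidal field, integrated with SciPy. The potential depth, width, field amplitude, and frequency below are illustrative assumptions, not the values used in the paper.

```python
# Classical driven Morse oscillator: a minimal sketch with illustrative
# parameters (D, a, r0, E0, omega are assumptions, not the paper's values).
import numpy as np
from scipy.integrate import solve_ivp

D, a, r0, m = 1.0, 1.0, 1.0, 1.0      # Morse depth, width, equilibrium, mass
E0, omega = 0.05, 0.9                 # driving field amplitude and frequency

def rhs(t, y):
    r, p = y
    # Force from V(r) = D * (1 - exp(-a*(r - r0)))**2, plus the driving term.
    e = np.exp(-a * (r - r0))
    force = -2.0 * D * a * e * (1.0 - e) + E0 * np.cos(omega * t)
    return [p / m, force]

# Two nearby initial conditions to probe sensitivity (a chaos indicator).
sol1 = solve_ivp(rhs, (0.0, 200.0), [1.2, 0.0], max_step=0.05)
sol2 = solve_ivp(rhs, (0.0, 200.0), [1.2 + 1e-6, 0.0], max_step=0.05)
print("final separation in r:", abs(sol1.y[0, -1] - sol2.y[0, -1]))
```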
Abstract:
Software Engineering originated from the motivation to mass-produce components and increase the productivity of production systems. Since its origins, numerous studies have been proposed on the subject as new techniques for building systems, such as Object-Oriented Programming and Aspect-Oriented Programming, have been established, along with methodologies to control them efficiently. However, years of study in the area were not sufficient to create a methodology for reusing software artifacts that is truly efficient and easy enough to become widespread. Given this, Model-Driven Development (MDD) attempts to promote reuse by taking the modeling of systems as a reference, making it part of the development process and establishing a large productivity gain. One of its approaches, Model-Driven Software Development (MDSD), focuses on improving development practices by using Domain-Specific Languages (DSLs) for this purpose. In this final paper, Xtext is used as a tool to assess the productivity and efficiency of this approach; to that end, bibliographic studies were made of the approach and the tool, and the methodology and a case study are presented to demonstrate the results and conclusions of this work.
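Xtext itself is a Java-based, grammar-driven framework, but the core MDSD idea, generating implementation code from a declarative model, can be sketched generically. The entity model and generated class below are hypothetical examples, not the paper's case study.

```python
# A minimal, language-agnostic sketch of MDSD-style code generation:
# a declarative model (what an Xtext DSL would be parsed into) is turned
# into implementation code. The entity and field names are hypothetical.
MODEL = {"Customer": {"name": "str", "email": "str", "credit": "float"}}

def generate(model: dict) -> str:
    lines = []
    for entity, fields in model.items():
        lines.append(f"class {entity}:")
        args = ", ".join(f"{f}: {t}" for f, t in fields.items())
        lines.append(f"    def __init__(self, {args}):")
        for f in fields:
            lines.append(f"        self.{f} = {f}")
        lines.append("")
    return "\n".join(lines)

print(generate(MODEL))  # emits a class definition derived from the model
```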
Abstract:
Each plasma physics laboratory has a proprietary control and data acquisition scheme, usually different from one laboratory to another, which means each laboratory has its own way of controlling experiments and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and such private systems make it difficult to follow the work remotely. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The choice of MDSplus (Model Driven System plus) is justified by the fact that it is widely used, so scientists from different institutions may use the same system in different experiments on different tokamaks without needing to know how each machine handles its acquisition and data analysis. Another important point is that MDSplus has a library system that allows communication between different languages (Java, Fortran, C, C++, Python) and programs such as MATLAB, IDL, and OCTAVE. In the case of the TCABR tokamak, interfaces (the subject of this paper) were developed between the system already in use and MDSplus, instead of adopting MDSplus at all stages, from control and data acquisition to data analysis. This was done to preserve a complex system already in operation, which would otherwise take a long time to migrate. The implementation also allows new components to be added that use MDSplus fully at all stages. (c) 2012 Elsevier B.V. All rights reserved.
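For remote participants, access via the MDSplus thin-client API looks roughly like the sketch below. The server address, tree name, shot number, and node path are hypothetical placeholders; the actual names depend on how the TCABR tree is configured.

```python
# Hedged sketch of remote data access with the MDSplus thin-client API.
# Server address, tree name, shot number, and node path are hypothetical.
from MDSplus import Connection

conn = Connection("mdsplus.example.org")   # hypothetical data server
conn.openTree("tcabr", 12345)              # tree and shot are placeholders

signal = conn.get(r"\TOP.DIAGNOSTICS:IP")  # hypothetical node path
times = conn.get(r"dim_of(\TOP.DIAGNOSTICS:IP)")
print(signal.data().shape, times.data().shape)  # numpy arrays
```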
Abstract:
University Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
University Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of migrating to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, applying advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter and at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and it is more easily amenable to advanced timing analysis by construction, regardless of system scale and complexity.
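Point (iii) lends itself to a compact illustration: placing code in memory so that functions that interact do not map to the same cache sets. The sketch below is a generic greedy placement for a hypothetical direct-mapped cache; the sizes, cache geometry, and conflict relation are illustrative assumptions, not the method proposed in the work.

```python
# Greedy code-layout sketch for a hypothetical direct-mapped cache:
# place each function so it shares no cache sets with the functions it
# conflicts with. All sizes and the conflict relation are assumptions.
CACHE_SIZE, LINE = 4096, 64                      # bytes (assumed geometry)
SETS = CACHE_SIZE // LINE

def cache_sets(addr: int, size: int) -> set[int]:
    first, last = addr // LINE, (addr + size - 1) // LINE
    return {line % SETS for line in range(first, last + 1)}

def place(functions: list[tuple[str, int]], conflicts: dict[str, set[str]]):
    layout, cursor = {}, 0
    for name, size in functions:
        addr = cursor
        # Slide forward until no cache set overlaps a conflicting,
        # already-placed function (bounded search keeps the sketch simple).
        while any(cache_sets(addr, size) & cache_sets(*layout[o])
                  for o in conflicts.get(name, set()) if o in layout):
            addr += LINE
        layout[name] = (addr, size)
        cursor = max(cursor, addr + size)
    return layout

funcs = [("control_loop", 700), ("sensor_read", 300), ("logger", 500)]
avoid = {"sensor_read": {"control_loop"}, "logger": set()}
print(place(funcs, avoid))   # addresses chosen to avoid cache-set conflicts
```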
Abstract:
Since the era of Cloud Computing began, many things have changed: it is now possible to obtain a server in real time and use automated tools to install applications on it. This thesis describes MODDE (Model-Driven Deployment Engine), a tool for automatic deployment starting from the ABS language. ABS is an object-oriented language that allows classes to be described in an abstract manner; every component declared in this language has values and dependencies. The thesis then describes the specification language DDLang, in which all the constraints and final configurations are expressed. Next, the architecture of MODDE is explained: it uses scripts that integrate the tools Zephyrus and Metis and creates an ABS main from the three input files, which is used to allocate machines in a cloud. The two sub-tools used by MODDE, Zephyrus and Metis, are also introduced. The former chooses which services to install, taking all of their dependencies into account and trying to optimise the result; the latter manages the order in which to install them, taking into account their internal states and dependencies. With the collaboration of these components, a rather effective automatic installation is obtained. Finally, after explaining how MODDE works, the thesis shows how to integrate it into a web service to make it available to users; it is installed on an Apache HTTP server inside a Docker container.
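The ordering problem handled by Metis, installing services in an order compatible with their dependencies, is at its core a topological sort. A minimal sketch with Python's standard library follows; the service names and dependency graph are hypothetical, and real deployment planning must also track component internal states.

```python
# Dependency-ordered installation, the core of what an engine like Metis
# automates (greatly simplified; service names are hypothetical).
from graphlib import TopologicalSorter

# Each service maps to the set of services it depends on.
dependencies = {
    "webapp":   {"database", "cache"},
    "cache":    set(),
    "database": {"storage"},
    "storage":  set(),
}

order = list(TopologicalSorter(dependencies).static_order())
print("install order:", order)  # e.g. ['cache', 'storage', 'database', 'webapp']
```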
Abstract:
A feature represents a functional requirement fulfilled by a system. Since many maintenance tasks are expressed in terms of features, it is important to establish the correspondence between a feature and its implementation in source code. Traditional approaches to establishing this correspondence exercise features to generate a trace of runtime events, which is then processed by post-mortem analysis. These approaches typically generate large amounts of data to analyze and, due to their static nature, do not support incremental and interactive analysis of features. We propose a radically different approach called live feature analysis, which provides a runtime model of features. Our approach analyzes features on a running system, makes it possible to grow feature representations by exercising different scenarios of the same feature, and identifies execution elements down to the sub-method level. We describe how live feature analysis is implemented effectively by annotating structural representations of code based on abstract syntax trees. We illustrate our live analysis with a case study where we achieve a more complete feature representation by exercising and merging variants of feature behavior, and we demonstrate the efficiency of our technique with benchmarks.
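The core mechanics, recording which code elements run while a feature scenario is exercised and merging the resulting sets across scenarios, can be sketched with Python's standard tracing hook. This is a generic illustration only; the paper's approach annotates AST-based structural representations in a live Smalltalk system.

```python
# Minimal feature-tracing sketch: record which functions execute while a
# feature scenario runs, and merge coverage across scenarios.
import sys
from collections import defaultdict

feature_map = defaultdict(set)  # feature name -> executed code elements

def trace_feature(feature: str, scenario):
    def tracer(frame, event, arg):
        if event == "call":
            code = frame.f_code
            feature_map[feature].add(f"{code.co_filename}:{code.co_name}")
        return tracer
    sys.settrace(tracer)
    try:
        scenario()          # exercise one variant of the feature
    finally:
        sys.settrace(None)

def checkout(): pay()       # toy scenario (hypothetical feature)
def pay(): pass

trace_feature("checkout", checkout)   # first scenario
trace_feature("checkout", pay)        # a second variant grows the feature set
print(feature_map["checkout"])
```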
Abstract:
Software must be constantly adapted to evolving domain knowledge and unanticipated requirements changes. To adapt a system at run-time we need to reflect on both its structure and its behavior. Object-oriented languages introduced reflection to deal with this issue; however, no reflective approach up to now has provided a unified solution to both structural and behavioral reflection. This paper describes Albedo, a unified approach to structural and behavioral reflection. Albedo is a model of fine-grained, unanticipated, dynamic structural and behavioral adaptation. Instead of providing reflective capabilities as an external mechanism, we integrate them deeply into the environment. We show how explicit meta-objects allow us to provide a range of reflective features and thereby evolve both application models and environments at run-time.
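Albedo is realized in Smalltalk, but the notion of an explicit meta-object that intercepts an object's behavior and adapts it at run-time can be sketched in Python. The class and method names below are hypothetical stand-ins for the idea.

```python
# Sketch of an explicit meta-object: intercepts method calls on its base
# object and weaves in unanticipated adaptations at run-time, without
# changing the class. (Python stand-in; Albedo itself is a Smalltalk model.)
class MetaObject:
    def __init__(self, base):
        self._base = base
        self._before = {}           # method name -> adaptation hook

    def adapt_before(self, name, hook):
        self._before[name] = hook   # unanticipated behavioral adaptation

    def __getattr__(self, name):
        attr = getattr(self._base, name)
        hook = self._before.get(name)
        if callable(attr) and hook:
            def wrapped(*args, **kwargs):
                hook(*args, **kwargs)       # run the adaptation first
                return attr(*args, **kwargs)
            return wrapped
        return attr

class Account:
    def __init__(self): self.balance = 0
    def deposit(self, amount): self.balance += amount

acct = MetaObject(Account())
acct.adapt_before("deposit", lambda amount: print("auditing:", amount))
acct.deposit(10)
print(acct.balance)   # 10, with auditing behavior woven in at run-time
```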
Abstract:
Object-oriented meta-languages such as MOF or EMOF are often used to specify domain-specific languages. However, these meta-languages lack the ability to describe behavior or operational semantics. Several approaches have used a subset of Java mixed with OCL as an executable meta-language. In this paper, we report our experience of using Smalltalk as an executable and integrated meta-language. We validated this approach by incrementally building, over the last decade, Moose, a meta-described reengineering environment. The reflective capabilities of Smalltalk support a uniform way of letting the base developer focus on his tasks while at the same time allowing him to meta-describe his domain model. The advantage of this approach is that the developer uses the same tools and environment for both tasks.
Abstract:
Much of the knowledge about software systems is implicit, and therefore difficult to recover by purely automated techniques. Architectural layers and the externally visible features of software systems are two examples of information that can be difficult to detect from source code alone, and that would benefit from additional human knowledge. Typical approaches to reasoning about data involve encoding an explicit meta-model and expressing analyses at that level. Due to its informal nature, however, human knowledge can be difficult to characterize up-front and integrate into such a meta-model. We propose a generic, annotation-based approach to capture such knowledge during the reverse engineering process. Annotation types can be iteratively defined, refined and transformed, without requiring a fixed meta-model to be defined in advance. We show how our approach supports reverse engineering by implementing it in a tool called Metanool and by applying it to (i) analyzing architectural layering, (ii) tracking reengineering tasks, (iii) detecting design flaws, and (iv) analyzing features.
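The central idea, attaching iteratively defined annotation types to model elements without committing to a meta-model up-front, can be sketched generically. Metanool itself is built in the Smalltalk/Moose environment, so the Python below is only an analogy, with hypothetical names.

```python
# Generic sketch of annotation-based knowledge capture: annotation types
# are defined (and refined) during the analysis, not fixed in a meta-model.
# Names are hypothetical; Metanool itself lives in the Moose environment.
from dataclasses import dataclass, field

@dataclass
class Element:                       # any reverse-engineered entity
    name: str
    annotations: dict = field(default_factory=dict)

annotation_types = {}                # grows as the analysis evolves

def define_annotation(kind: str, default):
    annotation_types[kind] = default # iteratively define/refine types

def annotate(elem: Element, kind: str, value=None):
    elem.annotations[kind] = value if value is not None else annotation_types[kind]

pkg = Element("network.protocol")
define_annotation("layer", "unassigned")      # architectural layering
define_annotation("task", "review naming")    # reengineering task tracking
annotate(pkg, "layer", "transport")
annotate(pkg, "task")
print(pkg.annotations)
```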
Abstract:
Object-oriented modelling languages such as EMOF are often used to specify domain-specific meta-models. However, these modelling languages lack the ability to describe behavior or operational semantics. Several approaches have used a subset of Java mixed with OCL as executable meta-languages. In this experience report we show how we use Smalltalk as an executable meta-language in the context of the Moose reengineering environment. We present how we implemented EMOF and its behavioral aspects. Over the last decade we validated this approach by incrementally building a meta-described reengineering environment. Such an approach bridges the gap between a code-oriented view and a meta-model-driven one. It avoids the creation of yet another language and reuses the infrastructure and run-time of the underlying implementation language. It offers a uniform way of letting developers focus on their tasks while at the same time allowing them to meta-describe their domain model. The advantage of our approach is that developers use the same tools and environment they use for their regular tasks. Still, the approach is not Smalltalk-specific but can be applied to any language offering an introspective API, such as Ruby, Python, CLOS, Java and C#.
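Since the authors note the approach carries over to any language with an introspective API, here is a small Python illustration of the idea: deriving an EMOF-like description of a class from the running system itself. The domain class is hypothetical; the paper's setting is Smalltalk/Moose.

```python
# Deriving a meta-description from a live class via introspection, in the
# spirit of using the implementation language as its own meta-language.
import inspect

class Invoice:                        # hypothetical domain class
    def __init__(self, number: int, total: float):
        self.number, self.total = number, total
    def pay(self): ...

def describe(cls) -> dict:
    """Build an EMOF-flavored description: attributes and operations."""
    init = inspect.signature(cls.__init__)
    attrs = {n: p.annotation.__name__
             for n, p in init.parameters.items() if n != "self"}
    ops = [n for n, m in inspect.getmembers(cls, inspect.isfunction)
           if n != "__init__"]
    return {"class": cls.__name__, "attributes": attrs, "operations": ops}

print(describe(Invoice))
# {'class': 'Invoice', 'attributes': {'number': 'int', 'total': 'float'},
#  'operations': ['pay']}
```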
Abstract:
Two methods for registering laser-scans of human heads and transforming them to a new, semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser-scans using the first algorithm; it directly optimizes the pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split into regions each described by an individual subspace, is addressed. Mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry in incomplete laser-scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small geometry, which is known as "shape completion". The importance of regularization in the case of extreme shape completion is shown.
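Both methods build on the Iterative Closest Point loop: find closest-point correspondences, then solve for the transform that best aligns them. A minimal rigid-ICP iteration in Python (NumPy/SciPy) follows; it illustrates only the generic framework, not the re-weighted or morphable-model variants, and the point clouds are synthetic.

```python
# One rigid ICP step: closest-point matching plus the Kabsch solution for
# the optimal rotation/translation. Generic framework only; the paper's
# variants add re-weighting and a morphable shape model on top of this.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source: np.ndarray, target: np.ndarray):
    """source: (N,3), target: (M,3) point clouds. Returns R, t."""
    matches = target[cKDTree(target).query(source)[1]]  # correspondences
    mu_s, mu_m = source.mean(axis=0), matches.mean(axis=0)
    H = (source - mu_s).T @ (matches - mu_m)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_m - R @ mu_s
    return R, t

rng = np.random.default_rng(0)
target = rng.normal(size=(200, 3))
theta = 0.2                                             # small misalignment
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
source = target @ Rz.T + 0.05
for _ in range(20):                                     # iterate to converge
    R, t = icp_step(source, target)
    source = source @ R.T + t
print("mean residual:", np.linalg.norm(source - target, axis=1).mean())
```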