907 results for MDA (Model Driven Architecture)


Relevance:

100.00%

Publisher:

Abstract:

Softeam has over 20 years of experience providing UML-based modelling solutions, such as its Modelio modelling tool, and its Constellation enterprise model management and collaboration environment. Due to the increasing number and size of the models used by Softeam’s clients, Softeam joined the MONDO FP7 EU research project, which worked on solutions for these scalability challenges and produced the Hawk model indexer among other results. This paper presents the technical details and several case studies on the integration of Hawk into Softeam’s toolset. The first case study measured the performance of Hawk’s Modelio support using varying amounts of memory for the Neo4j backend. In another case study, Hawk was integrated into Constellation to provide scalable global querying of model repositories. Finally, the combination of Hawk and the Epsilon Generation Language was compared against Modelio for document generation: for the largest model, Hawk was two orders of magnitude faster.
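The core service a model indexer such as Hawk provides, fast type-level lookup over large models without loading or traversing them whole, can be illustrated with a minimal sketch. The class, element ids and attribute names below are invented for illustration; the real Hawk indexes Modelio/EMF models into graph backends such as Neo4j.

```python
from collections import defaultdict

class ModelIndex:
    """Toy model index (names invented): maps metaclass names to element
    ids so type-level queries avoid a full traversal of every model."""

    def __init__(self):
        self._by_type = defaultdict(set)
        self._attrs = {}

    def add(self, element_id, type_name, **attributes):
        self._by_type[type_name].add(element_id)
        self._attrs[element_id] = attributes

    def all_of_type(self, type_name):
        """All instances of a metaclass, straight from the index."""
        return set(self._by_type[type_name])

    def select(self, type_name, predicate):
        """Filter instances of a metaclass by an attribute predicate."""
        return [e for e in self._by_type[type_name]
                if predicate(self._attrs[e])]

index = ModelIndex()
index.add("c1", "Class", name="Order", abstract=False)
index.add("c2", "Class", name="Item", abstract=True)
index.add("a1", "Attribute", name="price")

print(sorted(index.all_of_type("Class")))              # ['c1', 'c2']
print(index.select("Class", lambda a: a["abstract"]))  # ['c2']
```

Document generation over such an index then iterates the query results instead of walking the whole model, which is where the reported speedup comes from.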


One of the current challenges in model-driven engineering is enabling effective collaborative modelling. Two common approaches are either storing the models in a central repository, or keeping them under a traditional file-based version control system and building a centralized index for model-wide queries. Either way, special attention must be paid to the nature of these repositories and indexes as networked services: they should remain responsive even with an increasing number of concurrent clients. This paper presents an empirical study on the impact of certain key decisions on the scalability of concurrent model queries, using an Eclipse Connected Data Objects model repository and a Hawk model index. The study evaluates the impact of the network protocol, the API design and the internal caching mechanisms and analyzes the reasons for their varying performance.
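One of the internal caching mechanisms whose impact such a study measures can be sketched as a thread-safe LRU cache in front of the query engine. The class below is an illustrative toy, not CDO's or Hawk's actual caching layer.

```python
import threading
from collections import OrderedDict

class QueryCache:
    """Toy thread-safe LRU cache for query results (illustrative only)."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self._data = OrderedDict()
        self._lock = threading.Lock()
        self.hits = self.misses = 0

    def get(self, query, compute):
        with self._lock:
            if query in self._data:
                self._data.move_to_end(query)   # mark most recently used
                self.hits += 1
                return self._data[query]
            self.misses += 1
        result = compute(query)                 # run the query unlocked
        with self._lock:
            self._data[query] = result
            if len(self._data) > self.capacity:
                self._data.popitem(last=False)  # evict least recently used
        return result

cache = QueryCache(capacity=2)
cache.get("Class.allInstances", len)   # miss: computes len of the string
cache.get("Class.allInstances", len)   # hit: served from cache
print(cache.hits, cache.misses)        # 1 1
```

Computing outside the lock keeps concurrent clients responsive at the cost of an occasional duplicated query, one of the trade-offs such an empirical study can quantify.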


The core of this thesis is the modelling of complex web applications using the Story-Driven Modeling approach. The goal is to develop the complete application solely through the specification of models; writing source code by hand is not necessary. The thesis presents both the line of research that enables the intended modelling of web applications and the resulting outcomes. To support the development process, a model-driven software development process is also introduced that covers the modelling of a web application from requirements elicitation all the way to the final production of source code through code generation from the specified models. Tool support for the defined process is furthermore provided within the Fujaba Toolsuite. As part of this thesis, the existing toolsuite was extended with all the tools required to support the process. In addition, the tools already present in Fujaba were extended so that, alongside the classical facilities for modelling complex Java applications, the generation of web applications is also possible. Besides a precise description of the development process, the thesis describes in detail the resulting web applications and their specific properties. To generate these applications, the diagram type of workflow diagrams is introduced and described alongside the development process. These diagrams capture the intended user workflow of the application during requirements analysis and constitute a dedicated development artefact in the subsequent course of development.
Based on the workflow diagrams, the graphical user interface of the web application is described and a runtime system is initialised which controls the application according to the flows captured in the workflow diagram. This runtime system was developed as part of this thesis and anchored in the process support. All necessary changes, adaptations and extensions to existing parts of the Fujaba Toolsuite are described in detail from the perspective of creating the client-side data models of a web application, together with the prerequisites they must satisfy. In this context, it is also described how graph transformations can be used to implement business logic on the client side of a web application, and how data model changes can be synchronised between different clients. Overall, this thesis shows a way to apply the existing Story-Driven Modeling approach to the generation of web applications. Through the approach described here, web browsers are at the same time turned into a new class of graph rewriting engines, in that graph transformations are delivered to and executed within the browser's Ajax engine.


Business process modeling has undoubtedly emerged as a popular and relevant practice in Information Systems. Despite being an actively researched field, anecdotal evidence and experiences suggest that the focus of the research community is not always well aligned with the needs of industry. The main aim of this paper is, accordingly, to explore the current issues and the future challenges in business process modeling, as perceived by three key stakeholder groups (academics, practitioners, and tool vendors). We present the results of a global Delphi study with these three groups of stakeholders, and discuss the findings and their implications for research and practice. Our findings suggest that the critical areas of concern are standardization of modeling approaches, identification of the value proposition of business process modeling, and model-driven process execution. These areas are also expected to persist as business process modeling roadblocks in the future.


Autonomous Underwater Vehicles (AUVs) are revolutionizing oceanography through their versatility, autonomy and endurance. However, they are still an underutilized technology. For coastal operations, the ability to track a certain feature is of interest to ocean scientists. Adaptive and predictive path planning requires frequent communication with significant data transfer. Currently, most AUVs rely on satellite phones as their primary communication. This communication protocol is expensive and slow. To reduce communication costs and provide adequate data transfer rates, we present a hardware modification along with a software system that provides an alternative robust disruption-tolerant communications framework enabling cost-effective glider operation in coastal regions. The framework is specifically designed to address multi-sensor deployments. We provide a system overview and present testing and coverage data for the network. Additionally, we include an application of ocean-model driven trajectory design, which can benefit from the use of this network and communication system. Simulation and implementation results are presented for single and multiple vehicle deployments. The presented combination of infrastructure, software development and deployment experience brings us closer to the goal of providing a reliable and cost-effective data transfer framework to enable real-time, optimal trajectory design, based on ocean model predictions, to gather in situ measurements of interesting and evolving ocean features and phenomena.
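The essence of disruption-tolerant communication, buffering data while the link is down and forwarding the backlog at the next contact, can be sketched as follows. The class and message names below are invented for illustration; they are not the paper's actual framework.

```python
from collections import deque

class GliderNode:
    """Toy disruption-tolerant node (names invented): messages sent while
    the link is down are stored and forwarded at the next contact."""

    def __init__(self):
        self.outbox = deque()
        self.link_up = False
        self.delivered = []

    def send(self, msg):
        if self.link_up:
            self.delivered.append(msg)
        else:
            self.outbox.append(msg)      # store until the next contact

    def on_contact(self):
        """Link restored (e.g. the glider surfaces): flush the backlog."""
        self.link_up = True
        while self.outbox:
            self.delivered.append(self.outbox.popleft())

node = GliderNode()
node.send("ctd_cast_1")
node.send("ctd_cast_2")
node.on_contact()
print(node.delivered)  # ['ctd_cast_1', 'ctd_cast_2']
```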


The management of models over time in many domains requires different constraints to apply to some parts of the model as it evolves. Using EMF and its meta-language Ecore, the development of model management code and tools usually relies on the metamodel having some constraints, such as attribute and reference cardinalities and changeability, set in the least constrained way that any model user will require. Stronger versions of these constraints can then be enforced in code, or by attaching additional constraint expressions, and their evaluation engines, to the generated model code. We propose a mechanism that allows for variations to the constraining meta-attributes of metamodels, enabling enforcement of different constraints at different lifecycle stages of a model. We then discuss the implementation choices within EMF to support the validation of a state-specific metamodel on model graphs when changing states, as well as the enforcement of state-specific constraints when executing model change operations.
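The proposal can be illustrated with a small sketch outside EMF: a table of per-state constraint variants, checked when the model transitions between lifecycle states and consulted before change operations. All state names, rules and helper functions below are hypothetical.

```python
# Hypothetical per-lifecycle-state constraint variants (plain Python,
# not the EMF API): "released" strengthens the cardinality and makes
# the name attribute unchangeable.
CONSTRAINTS = {
    "draft":    {"min_tasks": 0, "name_frozen": False},
    "released": {"min_tasks": 1, "name_frozen": True},
}

def validate(model, state):
    """Validate a model graph against the state-specific variant."""
    errors = []
    if len(model["tasks"]) < CONSTRAINTS[state]["min_tasks"]:
        errors.append("too few tasks")
    return errors

def transition(model, new_state):
    """Re-validate when changing states; refuse an invalid transition."""
    errors = validate(model, new_state)
    if errors:
        raise ValueError(f"cannot enter {new_state}: {errors}")
    return new_state

def set_name(model, state, new_name):
    """A change operation checked against the current state's variant."""
    if CONSTRAINTS[state]["name_frozen"]:
        raise PermissionError(f"name is read-only in state {state!r}")
    model["name"] = new_name

model = {"name": "proc", "tasks": []}
print(validate(model, "draft"))     # []
print(validate(model, "released"))  # ['too few tasks']
```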


This chapter deals with technical aspects of how USDL service descriptions can be read from and written to different representations for use by humans and tools. A combination of techniques for representing and exchanging USDL has been drawn from Model-Driven Engineering and Semantic Web technologies. The USDL language's structural definition is specified as a MOF meta-model, but some modules were originally defined using the OWL language from the Semantic Web community and translated to the meta-model format. We begin with the important topic of serializing USDL descriptions into XML, so that they can be exchanged between editors, repositories, and other tools. The following topic is how USDL can be made available through the Semantic Web as a network of linked data, connected via URIs. Finally, consideration is given to human-readable representations of USDL descriptions, and how they can be generated, in large part, from the contents of a stored USDL model.
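Serializing a model-based description to XML for tool interchange can be sketched with the standard library; the element and attribute names below are illustrative and do not follow the actual USDL schema.

```python
import xml.etree.ElementTree as ET

def to_xml(service):
    """Serialise a toy service description to XML (illustrative schema,
    not USDL's)."""
    root = ET.Element("service", name=service["name"])
    for op in service["operations"]:
        ET.SubElement(root, "operation", name=op)
    return ET.tostring(root, encoding="unicode")

desc = {"name": "Shipping", "operations": ["quote", "track"]}
xml = to_xml(desc)
print(xml)

# Round-trip: any tool can parse the exchanged document back.
parsed = ET.fromstring(xml)
assert [op.get("name") for op in parsed] == ["quote", "track"]
```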


The potential impacts of extreme water level events on our coasts are increasing as populations grow and sea levels rise. To better prepare for the future, coastal engineers and managers need accurate estimates of average exceedance probabilities for extreme water levels. In this paper, we estimate present day probabilities of extreme water levels around the entire coastline of Australia. Tides and storm surges generated by extra-tropical storms were included by creating a 61-year (1949-2009) hindcast of water levels using a high resolution depth averaged hydrodynamic model driven with meteorological data from a global reanalysis. Tropical cyclone-induced surges were included through numerical modelling of a database of synthetic tropical cyclones equivalent to 10,000 years of cyclone activity around Australia. Predicted water level data was analysed using extreme value theory to construct return period curves for both the water level hindcast and synthetic tropical cyclone modelling. These return period curves were then combined by taking the highest water level at each return period.
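The final combination step is simple: at each return period, take the higher of the two estimated water levels. The numbers below are invented for illustration; the paper's values come from the hindcast and the synthetic tropical-cyclone modelling.

```python
# Illustrative return-period curves (all values made up).
return_periods  = [10, 100, 1000, 10000]   # years
hindcast_levels = [1.8, 2.1, 2.3, 2.4]     # m: tide + extra-tropical surge
cyclone_levels  = [0.9, 1.6, 2.6, 3.4]     # m: synthetic TC surge

# Combined curve: highest water level at each return period.
combined = [max(h, c) for h, c in zip(hindcast_levels, cyclone_levels)]
print(combined)  # [1.8, 2.1, 2.6, 3.4]
```

Note how the extra-tropical hindcast dominates at short return periods while the rarer but larger cyclone surges dominate the tail, which is why the two curves must be estimated separately before being merged.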


In-memory databases have become a mainstay of enterprise computing, offering significant performance and scalability boosts for online analytical and (to a lesser extent) transactional processing, as well as improved prospects for integration across different applications through an efficient shared database layer. Significant research and development has been undertaken over several years concerning data management considerations of in-memory databases. However, limited insights are available on the impacts of applications and their supportive middleware platforms and how they need to evolve to fully function through, and leverage, in-memory database capabilities. This paper provides a first, comprehensive exposition of how in-memory databases impact Business Process Management, as a mission-critical and exemplary model-driven integration and orchestration middleware. Through it, we argue that in-memory databases will render some prevalent uses of legacy BPM middleware obsolete, but also open up exciting possibilities for tighter application integration, better process automation performance and some entirely new BPM capabilities such as process-based application customization. To validate the feasibility of an in-memory BPM, we develop a surprisingly simple BPM runtime embedded into SAP HANA and providing for BPMN-based process automation capabilities.


Moose populations are managed for sustainable yield balanced against costs caused by damage to forestry or agriculture and collisions with vehicles. Optimal harvests can be calculated based on a structured population model driven by data on abundance and the composition of bulls, cows, and calves obtained by aerial-survey monitoring during winter. Quotas are established by the respective government agency and licenses are issued to hunters to harvest an animal of specified age or sex during the following autumn. Because the cost of aerial monitoring is high, we use a Management Strategy Evaluation to evaluate the costs and benefits of periodic aerial surveys in the context of moose management. Our on-the-fly "seat of your pants" alternative to independent monitoring is management based solely on the kill of moose by hunters, which is usually sufficient to alert the manager to declines in moose abundance that warrant adjustments to harvest strategies. Harvests are relatively cheap to monitor; therefore, data can be obtained each year facilitating annual adjustments to quotas. Other sources of "cheap" monitoring data such as records of the number of moose seen by hunters while hunting also might be obtained, and may provide further useful insight into population abundance, structure and health. Because conservation dollars are usually limited, the high cost of aerial surveys is difficult to justify when alternative methods exist. © 2012 Elsevier Inc.
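A structured population model of the kind described can be sketched as a one-year projection over bulls, cows and calves with a harvest quota. All demographic rates and numbers below are invented for illustration; a real model would be parameterised from aerial-survey or harvest data.

```python
def project(bulls, cows, calves, quota, calf_rate=0.8,
            survival=0.9, calf_survival=0.5):
    """One-year projection with a bull-only harvest quota.
    All rates are illustrative assumptions."""
    recruits = round(calves * calf_survival)      # surviving calves mature
    bulls = round(bulls * survival) + recruits // 2
    cows = round(cows * survival) + recruits - recruits // 2
    calves = round(cows * calf_rate)              # next calf cohort
    harvest = min(quota, bulls)                   # licences actually filled
    return bulls - harvest, cows, calves

state = (100, 200, 120)                           # bulls, cows, calves
state = project(*state, quota=30)
print(state)  # (90, 210, 168)
```

A Management Strategy Evaluation would run many such projections under alternative monitoring schemes (aerial survey vs. harvest records) and compare the resulting quotas, yields and costs.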


One of the main challenges in data analytics is that discovering structures and patterns in complex datasets is a computer-intensive task. Recent advances in high-performance computing provide part of the solution. Multicore systems are now more affordable and more accessible. In this paper, we investigate how this can be used to develop more advanced methods for data analytics. We focus on two specific areas: model-driven analysis and data mining using optimisation techniques.


Five significant problems hinder advances in understanding of the volcanology of kimberlites: (1) kimberlite geology is very model driven; (2) a highly genetic terminology drives deposit or facies interpretation; (3) the effects of alteration on preserved depositional textures have been grossly underestimated; (4) the level of understanding of the physical process significance of preserved textures is limited; and (5) some inferred processes and deposits are not based on actual, modern volcanological processes. These issues need to be addressed in order to advance understanding of kimberlite volcanological pipe-forming processes and deposits. The traditional, steep-sided southern African pipe model (Class I) consists of a steep tapering pipe with a deep root zone, a middle diatreme zone and an upper crater zone (if preserved). Each zone is thought to be dominated by distinctive facies, respectively: hypabyssal kimberlite (HK, descriptively called here massive coherent porphyritic kimberlite), tuffisitic kimberlite breccia (TKB, descriptively here called massive, poorly sorted lapilli tuff) and crater zone facies, which include variably bedded pyroclastic kimberlite and resedimented and reworked volcaniclastic kimberlite (RVK). Porphyritic coherent kimberlite may, however, also be emplaced at different levels in the pipe, as later stage intrusions, as well as dykes in the surrounding country rock. The relationship between HK and TKB is not always clear. Subterranean fluidisation as an emplacement process is a largely unsubstantiated hypothesis; modern in-vent volcanological processes should initially be considered to explain observed deposits. Crater zone volcaniclastic deposits can occur within the diatreme zone of some pipes, indicating that the pipe was largely empty at the end of the eruption, and subsequently began to fill in, largely through resedimentation and sourcing of pyroclastic deposits from nearby vents.
Classes II and III Canadian kimberlite models have a more factual, descriptive basis, but are still inadequately documented given the recency of their discovery. The diversity amongst kimberlite bodies suggests that a three-model classification is an over-simplification. Every kimberlite is altered to varying degrees, which is an intrinsic consequence of the ultrabasic composition of kimberlite and the in-vent context; few preserve original textures. The effects of syn- to post-emplacement alteration on original textures have not been adequately considered to date, and should be back-stripped to identify original textural elements and configurations. Applying sedimentological textural configurations as a guide to emplacement processes would be useful. The traditional terminology has many connotations about spatial position in the pipe and of process. Perhaps the traditional terminology can be retained in the industrial situation as a general lithofacies-mining terminological scheme because it is so entrenched. However, for research purposes a more descriptive lithofacies terminology should be adopted to facilitate detailed understanding of deposit characteristics, important variations in these, and their process origins. For example, every deposit of TKB is different in componentry, texture, or depositional structure. However, because so many deposits in many different pipes are called TKB, there is an implication that they are all similar and that similar processes were involved, which is far from clear.


Pond apple invades riparian and coastal environments with water acting as the main vector for dispersal. As seeds float and can reach the ocean, a seed tracking model driven by near surface ocean currents was used to develop maps of potential seed dispersal. Seeds were ‘released’ in the model from sites near the mouths of major North Queensland rivers. Most seeds reach land within three months of release, settling predominantly on windward-facing locations. During calm and monsoonal conditions, seeds were generally swept in a southerly direction; however, movement turns northward during south-easterly trade winds. Seeds released in February from the Johnstone River were capable of being moved anywhere from 100 km north to 150 km south depending on prevailing conditions. Although wind driven currents are the primary mechanism influencing seed dispersal, tidal currents, the East Australian Current, and other factors such as coastline orientation, release location and time also play an important role in determining dispersal patterns. In extreme events such as tropical cyclone Justin in 1997, north east coast rivers could potentially transport seed over 1300 km to the Torres Strait, Papua New Guinea and beyond.
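The basic update inside such a seed-tracking model advects each floating seed by the surface current plus a wind-driven component. The velocities and the 3% wind factor below are illustrative assumptions, not values from the study.

```python
def advect(pos, current, wind, hours, wind_factor=0.03):
    """One tracking step: drift velocity = current + a fraction of the
    wind (all values illustrative). Velocities in m/s, positions in km."""
    u = current[0] + wind_factor * wind[0]
    v = current[1] + wind_factor * wind[1]
    # 1 m/s sustained for 1 h moves the seed 3.6 km.
    return (pos[0] + u * hours * 3.6, pos[1] + v * hours * 3.6)

# A seed 'released' at the origin: a day of southerly drift under
# monsoonal conditions, then pushed back north by south-easterly trades.
pos = (0.0, 0.0)
pos = advect(pos, current=(0.0, -0.2), wind=(0.0, -5.0), hours=24)
pos = advect(pos, current=(0.0, 0.1), wind=(0.0, 8.0), hours=24)
print(pos)  # net displacement (km) after two days
```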


A new scheme for robust estimation of the partial state of linear time-invariant multivariable systems is presented, and it is shown how this may be used for the detection of sensor faults in such systems. We consider an observer to be robust if it generates a faithful estimate of the plant state in the face of modelling uncertainty or plant perturbations. Using the Stable Factorization approach we formulate the problem of optimal robust observer design by minimizing an appropriate norm on the estimation error. A logical candidate is the 2-norm, corresponding to an H∞ optimization problem, for which solutions are readily available. In the special case of a stable plant, the optimal fault diagnosis scheme reduces to an internal model control architecture.
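The idea behind observer-based sensor fault detection can be sketched on a scalar plant: the residual between the measured and estimated output stays at zero under exact modelling and jumps when a sensor bias appears. The system, gains and bias below are invented for illustration; they are not the paper's Stable Factorization design.

```python
def run(fault_at, steps=40, a=0.9, c=1.0, L=0.5):
    """Simulate plant, observer and residual; inject a sensor bias of
    0.8 from step `fault_at` onwards (all values illustrative)."""
    x = x_hat = 1.0
    residuals = []
    for k in range(steps):
        y = c * x + (0.8 if k >= fault_at else 0.0)  # faulty sensor
        r = y - c * x_hat                            # residual signal
        residuals.append(abs(r))
        x_hat = a * x_hat + L * r                    # observer update
        x = a * x                                    # plant dynamics
    return residuals

res = run(fault_at=20)
print(max(res[:20]), max(res[20:]))  # zero before the fault, large after
```

A detector would compare the residual against a threshold; robust design shapes the observer gain so that model uncertainty, unlike a fault, cannot push the residual over it.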


This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs one of point-wise, stencil, reduction or data-dependent operations on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth preventing effective utilization of parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
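The stage kinds mentioned, stencil and point-wise, can be illustrated with a toy two-stage pipeline in plain Python (no polyhedral optimization involved; the stages and image are invented for illustration):

```python
def blur_row(row):
    """Stencil stage: each output pixel reads a 3-pixel neighbourhood,
    clamped at the image border."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def threshold(row, t):
    """Point-wise stage: each output pixel reads exactly one input pixel."""
    return [1 if v > t else 0 for v in row]

row = [0, 0, 9, 0, 0, 9, 9, 9]
print(threshold(blur_row(row), 2))  # [0, 1, 1, 1, 1, 1, 1, 1]
```

Because the stencil reads a neighbourhood while the point-wise stage reads a single value, fusing and tiling them without losing parallelism requires the dependence analysis that a compiler like PolyMage automates.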