20 results for Domain-specific programming languages
Abstract:
Purpose – The purpose of this paper is to outline a seven-phase simulation conceptual modelling procedure that incorporates existing practice and embeds a process reference model (i.e. SCOR). Design/methodology/approach – An extensive review of the simulation and SCM literature identifies a set of requirements for a domain-specific conceptual modelling procedure. The associated design issues for each requirement are discussed, and the utility of SCOR in the process of conceptual modelling is demonstrated using two development cases. Ten key concepts are synthesised and aligned to a general process for conceptual modelling. Further work is outlined to detail, refine and test the procedure with different process reference models in different industrial contexts. Findings – Simulation conceptual modelling is often regarded as the most important yet least understood aspect of a simulation project (Robinson, 2008a). Even today, there has been little research into guidelines to aid in the creation of a conceptual model. Design issues are discussed for building an ‘effective’ conceptual model, and the domain-specific requirements for modelling supply chains are addressed. The ten key concepts are incorporated to aid in describing the supply chain problem (i.e. the components and relationships that need to be included in the model), the model content (i.e. rules for determining the simplest model boundary and level of detail with which to implement the model) and model validation. Originality/value – The paper addresses Robinson’s (2008a) call for research into defining and developing new approaches for conceptual modelling and Manuj et al.’s (2009) discussion on improving the rigour of simulation studies in SCM. It is expected that more detailed guidelines will yield benefits to both expert modellers (i.e. averting typical modelling failures) and novice modellers (i.e. guided practice and less reliance on hopeful intuition).
Abstract:
The potential for sharing environmental data and models is huge, but realising it can be challenging for experts without specific programming expertise. We built an open-source, cross-platform framework (‘Tzar’) to run models across distributed machines. Tzar is simple to set up and use, allows dynamic parameter generation, and enhances reproducibility by accessing versioned data and code. Combining Tzar with Docker lowers the entry barrier further by versioning and bundling all required modules and dependencies, together with the database needed to schedule work.
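To make the workflow concrete, here is a minimal Python sketch of the pattern a framework like Tzar automates: dynamically generating parameter combinations and dispatching each model run against pinned code and data revisions. All names here (ModelRun, sweep, execute) are hypothetical illustrations, not Tzar's actual API.

```python
# Hypothetical sketch of the pattern Tzar automates: generating parameter
# combinations and dispatching model runs against pinned (versioned) inputs.
# ModelRun, sweep and execute are illustrative names, not Tzar's API.
import itertools
from dataclasses import dataclass

@dataclass
class ModelRun:
    code_revision: str   # e.g. a VCS revision, pinned for reproducibility
    data_revision: str   # versioned input data
    params: dict

def sweep(grid: dict):
    """Yield every combination of the parameter grid (dynamic generation)."""
    keys = list(grid)
    for values in itertools.product(*grid.values()):
        yield dict(zip(keys, values))

def execute(run: ModelRun):
    # In a real framework this run would be scheduled on a distributed
    # worker, which first checks out code/data at the pinned revisions.
    print(f"run code@{run.code_revision} data@{run.data_revision}: {run.params}")

if __name__ == "__main__":
    for params in sweep({"rainfall": [0.8, 1.0, 1.2], "seed": [1, 2]}):
        execute(ModelRun("r42", "d7", params))
```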
Abstract:
Due to dynamic variability, identifying the specific conditions under which non-functional requirements (NFRs) are satisfied may only be possible at runtime. Therefore, it is necessary to consider the dynamic treatment of relevant information during requirements specification. The associated data can be gathered by monitoring the execution of the application and its underlying environment, to support reasoning about how the current application configuration is fulfilling the established requirements. This paper presents a dynamic decision-making infrastructure to support both NFR representation and monitoring, and to reason about the degree of satisfaction of NFRs during runtime. The infrastructure is composed of: (i) an extended feature model aligned with a domain-specific language for representing NFRs to be monitored at runtime; (ii) a monitoring infrastructure to continuously assess NFRs at runtime; and (iii) a flexible decision-making process to select the best available configuration based on the satisfaction degree of the NFRs. The evaluation of the approach has shown that it is able to choose application configurations that fit user NFRs well based on runtime information. The evaluation also revealed that the proposed infrastructure provided consistent indicators regarding the best application configurations that fit user NFRs. Finally, a benefit of our approach is that it allows us to quantify the level of satisfaction with respect to the NFR specification.
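As an illustration of step (iii), the sketch below shows one plausible way to score candidate configurations by a weighted degree of NFR satisfaction computed from monitored values. The NFR names, thresholds and scoring function are invented for the example; this is not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): scoring candidate
# configurations by their weighted degree of NFR satisfaction, computed
# from monitored runtime measurements.
from dataclasses import dataclass

@dataclass
class NFR:
    name: str
    threshold: float            # required level, e.g. max response time in ms
    weight: float               # relative importance
    lower_is_better: bool = True

def satisfaction(nfr: NFR, measured: float) -> float:
    """Map a monitored measurement to a satisfaction degree in [0, 1]."""
    ratio = (nfr.threshold / measured) if nfr.lower_is_better else (measured / nfr.threshold)
    return max(0.0, min(1.0, ratio))

def best_configuration(monitored: dict, nfrs: list) -> str:
    """Pick the configuration whose weighted satisfaction is highest."""
    def score(measurements: dict) -> float:
        return sum(n.weight * satisfaction(n, measurements[n.name]) for n in nfrs)
    return max(monitored, key=lambda c: score(monitored[c]))

nfrs = [NFR("response_ms", 200, 0.7),
        NFR("availability", 0.99, 0.3, lower_is_better=False)]
monitored = {
    "config_a": {"response_ms": 250, "availability": 0.995},
    "config_b": {"response_ms": 180, "availability": 0.990},
}
print(best_configuration(monitored, nfrs))   # -> config_b
```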
Abstract:
The sharing of product and process information plays a central role in coordinating supply chain operations and is a key driver for their success. "Linked pedigrees", linked datasets that encapsulate event-based traceability information about artifacts as they move along the supply chain, provide a scalable mechanism to record and facilitate the sharing of track-and-trace knowledge among supply chain partners. In this paper we present "OntoPedigree", a content ontology design pattern for the representation of linked pedigrees, which can be specialised and extended to define domain-specific traceability ontologies. Events captured within the pedigrees are specified using EPCIS (a GS1 standard for the specification of traceability information within and across enterprises), while certification information is described using PROV (a vocabulary for modelling the provenance of resources). We exemplify the utility of OntoPedigree in linked pedigrees generated for supply chains within the perishable goods and pharmaceuticals sectors.
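The sketch below conveys the core idea of a linked pedigree using Python's rdflib and the standard PROV vocabulary: each pedigree is attributed to a supply chain partner and derived from the upstream pedigree it extends. The ex: terms are illustrative stand-ins, not OntoPedigree's actual IRIs, and EPCIS event details are omitted.

```python
# Hedged sketch of the "linked" part of a linked pedigree, using rdflib and
# the PROV vocabulary. ex:Pedigree and the instance IRIs are stand-ins for
# illustration, not OntoPedigree's actual terms.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import PROV, XSD

EX = Namespace("http://example.org/pedigree/")

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# A pedigree issued by a downstream partner, linked to the upstream
# pedigree it extends via prov:wasDerivedFrom.
g.add((EX.pedigree2, RDF.type, EX.Pedigree))
g.add((EX.pedigree2, PROV.wasAttributedTo, EX.DistributorLtd))
g.add((EX.pedigree2, PROV.wasDerivedFrom, EX.pedigree1))
g.add((EX.pedigree2, PROV.generatedAtTime,
       Literal("2024-05-01T09:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```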
Abstract:
Some organizations end up reimplementing the same class of business process over and over: an "administrative process", which consists of managing a form through several states while involving various roles in the organization. This results in wasted time that could be dedicated to better understanding the process or dealing with the fine details that are specific to it. Existing virtual office solutions require specific training and infrastructure and may result in vendor lock-in. In this paper, we propose using a high-level domain-specific language (AdminDSL) to describe the administrative process, together with a separate code generator targeting a standard web framework. We have implemented the approach using Xtext, EGL and the Django web framework, and we illustrate it through two case studies: a synthetic examination process which illustrates the architecture of the generated code, and a real-world workplace survey process that identified several future avenues for improvement.
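To give a flavour of the kind of logic such a generator might emit, here is a small, hypothetical Python state machine for an administrative process: a form moves through named states, and each transition is restricted to a role. The states, actions and roles are invented for illustration and are not taken from the paper's case studies or its generated Django code.

```python
# Hypothetical sketch of the core of an "administrative process": a form
# managed through several states, with each transition restricted to a role.
# States, actions and roles below are invented for illustration.
class AdministrativeProcess:
    # Transition table: (current_state, action) -> (required_role, next_state)
    TRANSITIONS = {
        ("draft",     "submit"):  ("employee", "submitted"),
        ("submitted", "approve"): ("manager",  "approved"),
        ("submitted", "reject"):  ("manager",  "draft"),
    }

    def __init__(self):
        self.state = "draft"

    def perform(self, action: str, role: str):
        key = (self.state, action)
        if key not in self.TRANSITIONS:
            raise ValueError(f"{action!r} not allowed from state {self.state!r}")
        required_role, next_state = self.TRANSITIONS[key]
        if role != required_role:
            raise PermissionError(f"{action!r} requires role {required_role!r}")
        self.state = next_state

p = AdministrativeProcess()
p.perform("submit", role="employee")
p.perform("approve", role="manager")
print(p.state)   # -> approved
```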