940 results for generic model
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created, for the user-centric communication domain, using the aforementioned approach.
This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
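The decoupling described above can be pictured as a plug-in (strategy-style) arrangement, in which a generic synthesis engine delegates domain semantics to a swappable DSK component. The following Python sketch is purely illustrative: the class names, method names, and command strings are invented for this example and are not the CVM's actual interfaces.

```python
# Illustrative sketch: a generic model of execution (GMoE) loosely
# coupled to swappable domain-specific knowledge (DSK) extensions.
# All names here are hypothetical, not the CVM's real API.

class DomainKnowledge:
    """Interface a domain implements to plug into the generic engine."""
    def commands_for(self, model_change):
        raise NotImplementedError

class CommunicationDSK(DomainKnowledge):
    """DSK for the user-centric communication domain (stand-in)."""
    def commands_for(self, model_change):
        return [f"comm:{model_change}"]

class MicrogridDSK(DomainKnowledge):
    """DSK for the microgrid energy-management domain (stand-in)."""
    def commands_for(self, model_change):
        return [f"grid:{model_change}"]

class SynthesisEngine:
    """Generic MoE: detects model changes, delegates semantics to the DSK."""
    def __init__(self, dsk):
        self.dsk = dsk

    def synthesize(self, old_model, new_model):
        # Model elements added at runtime drive script generation.
        changes = sorted(set(new_model) - set(old_model))
        script = []
        for change in changes:
            script.extend(self.dsk.commands_for(change))
        return script

engine = SynthesisEngine(CommunicationDSK())
print(engine.synthesize({"connA"}, {"connA", "connB"}))  # ['comm:connB']
```

Instantiating the same `SynthesisEngine` with `MicrogridDSK()` reuses the change-detection logic unchanged, which is the kind of reuse the GMoE claim is about.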
Abstract:
Practitioners and academics have developed numerous maturity models for many domains in order to measure competency. These initiatives have often been influenced by the Capability Maturity Model. However, no cumulative effort has been made to generalize the phases of developing a maturity model in an arbitrary domain. This paper proposes such a methodology and outlines the main phases of generic model development. The proposed methodology is illustrated with examples from two advanced maturity models in the domains of Business Process Management and Knowledge Management.
Abstract:
The proliferation of innovative schemes to address climate change at international, national and local levels signals a fundamental shift in the priority and role of the natural environment to society, organizations and individuals. This shift in shared priorities invites academics and practitioners to consider the role of institutions in shaping and constraining responses to climate change at multiple levels of organizations and society. Institutional theory provides an approach to conceptualising and addressing climate change challenges by focusing on the central logics that guide society, organizations and individuals and their material and symbolic relationship to the environment. For example, framing a response to climate change in the form of an emission trading scheme evidences a practice informed by a capitalist market logic (Friedland and Alford 1991). However, not all responses need necessarily align with a market logic. Indeed, Thornton (2004) identifies six broad societal sectors, each with its own logic (markets, corporations, professions, states, families, religions). Hence, understanding the logics that underpin successful – and unsuccessful – climate change initiatives contributes to revealing how institutions shape and constrain practices, and provides valuable insights for policy makers and organizations. This paper develops models and propositions to consider the construction of, and challenges to, climate change initiatives based on institutional logics (Thornton and Ocasio 2008). We propose that the challenge of understanding and explaining how climate change initiatives are successfully adopted be examined in terms of their institutional logics, and how these logics evolve over time. To achieve this, a multi-level framework of analysis that encompasses society, organizations and individuals is necessary (Friedland and Alford 1991).
However, to date most extant studies of institutional logics have tended to emphasize one level over the others (Thornton and Ocasio 2008: 104). In addition, existing studies related to climate change initiatives have largely been descriptive (e.g. Braun 2008) or prescriptive (e.g. Boiral 2006) in terms of the suitability of particular practices. This paper contributes to the literature on logics in two ways. First, it examines multiple levels: the proliferation of the climate change agenda provides a site in which to study how institutional logics are played out across multiple, yet embedded, levels within society through institutional forums in which change takes place. Second, the paper specifically examines how institutional logics provide society with organising principles – material practices and symbolic constructions – which enable and constrain their actions and help define their motives and identity. Based on this model, we develop a series of propositions on the conditions required for the successful introduction of climate change initiatives. The paper proceeds as follows. We present a review of literature related to institutional logics and develop a generic model of the process of the operation of institutional logics. We then consider how this applies to key initiatives related to climate change. Finally, we develop a series of propositions which might guide insights into the successful implementation of climate change practices.
Abstract:
This paper presents a key-based generic model for digital image watermarking. The model aims to address an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system and define its inputs and outputs. We then proceed to incorporate the use of keys in the design of various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
Abstract:
Incorporating knowledge-based urban development (KBUD) strategies in the urban planning and development process is a challenging and complex task due to the fragmented and incoherent nature of the existing KBUD models. This paper scrutinizes and compares these KBUD models with the aim of identifying key and common features that help in developing a new comprehensive and integrated KBUD model. The features and characteristics of the existing KBUD models are determined through a thorough literature review, and the analysis reveals that while these models are invaluable and useful in some cases, the lack of a comprehensive perspective and the absence of full integration of all necessary development domains render them incomplete as a generic model. The proposed KBUD model considers all central elements of urban development and sets an effective platform for planners and developers to achieve more holistic development outcomes. The proposed model, when developed further, has high potential to support researchers, practitioners and particularly city and state administrations that aim at knowledge-based development.
Abstract:
While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on adversary capabilities. It is envisaged that, with proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
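The generic keyed formulation described in these two abstracts amounts to an embedding function `embed(I, w, k)` producing a watermarked image and an extraction function `extract(I_w, k)` recovering the watermark. The Python sketch below uses least-significant-bit embedding at key-selected pixel positions purely as a stand-in illustration; it is not the scheme the papers formalize, and the function names are invented.

```python
# Minimal sketch of a keyed watermarking pair: the key seeds a PRNG
# that selects which pixel positions carry watermark bits, so only a
# holder of the key can locate and read them. LSB embedding is a toy
# stand-in for the paper's generic component functions.
import random

def embed(image, watermark_bits, key):
    """Return a copy of `image` (a flat list of pixel values) with the
    watermark bits written into the LSBs of key-selected positions."""
    rng = random.Random(key)
    positions = rng.sample(range(len(image)), len(watermark_bits))
    out = list(image)
    for pos, bit in zip(positions, watermark_bits):
        out[pos] = (out[pos] & ~1) | bit  # overwrite least significant bit
    return out

def extract(image, key, n_bits):
    """Re-derive the same key-selected positions and read the LSBs back."""
    rng = random.Random(key)
    positions = rng.sample(range(len(image)), n_bits)
    return [image[pos] & 1 for pos in positions]

img = [120, 45, 200, 33, 89, 150, 77, 10]
wm = [1, 0, 1, 1]
marked = embed(img, wm, key=42)
assert extract(marked, key=42, n_bits=4) == wm
```

The model's attack definitions then correspond to an adversary manipulating `marked` without knowing `key`; robustness properties describe when `extract` still wins.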
Abstract:
A dynamic distributed model is presented that reproduces the dynamics of a wide range of varied battle scenarios with a general and abstract representation. The model illustrates the rich dynamic behavior that can be achieved from a simple generic model.
Abstract:
A method for the construction of a patient-specific model of a scoliotic torso for surgical planning via inter-patient registration is presented. Magnetic Resonance Images (MRI) of a generic model are registered to surface topography (TP) and X-ray data of a test patient. A partial model is first obtained via thin-plate spline registration between TP and X-ray data of the test patient. The MRIs from the generic model are then fit into the test patient using articulated model registration between the vertebrae of the generic model’s MRIs in prone position and the test patient’s X-rays in standing position. A non-rigid deformation of the soft tissues is performed using a modified thin-plate spline constrained to maintain bone rigidity and to fit in the space between the vertebrae and the surface of the torso. Results show average Dice values of 0.975 ± 0.012 between the MRIs following inter-patient registration and the surface topography of the test patient, which is comparable to the average value of 0.976 ± 0.009 previously obtained following intra-patient registration. The results also show a significant improvement compared to rigid inter-patient registration. Future work includes validating the method on a larger cohort of patients and incorporating soft tissue stiffness constraints. The method developed can be used to obtain a geometric model of a patient including bone structures, soft tissues and the surface of the torso which can be incorporated in a surgical simulator in order to better predict the outcome of scoliosis surgery, even if MRI data cannot be acquired for the patient.
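The Dice values reported above (e.g. 0.975 ± 0.012) are instances of the Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), between the registered model and the patient's surface. A minimal sketch, here over sets of voxel indices rather than the paper's actual volumes:

```python
# Dice similarity coefficient between two segmentations, represented
# here as sets of voxel coordinates. 1.0 means perfect overlap.

def dice(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|); empty-vs-empty defined as 1.0."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

seg = {(0, 0), (0, 1), (1, 1)}   # hypothetical registered-model voxels
ref = {(0, 0), (1, 1), (1, 2)}   # hypothetical patient-surface voxels
print(dice(seg, ref))  # 2*2/(3+3) = 0.666...
```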
Abstract:
Behavior-based navigation of autonomous vehicles requires the recognition of the navigable areas and the potential obstacles. In this paper we describe a model-based object recognition system which is part of an image interpretation system intended to assist the navigation of autonomous vehicles that operate in industrial environments. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach where different low-level vision techniques extract a multitude of image descriptors which are then analyzed using a rule-based reasoning system to interpret the image content. This system has been implemented using a rule-based cooperative expert system.
Abstract:
We describe a model-based object recognition system which is part of an image interpretation system intended to assist autonomous vehicle navigation. The system is intended to operate in man-made environments. Behavior-based navigation of autonomous vehicles involves the recognition of navigable areas and the potential obstacles. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach where different low-level vision techniques extract a multitude of image descriptors which are then analyzed using a rule-based reasoning system to interpret the image content. This system has been implemented using CEES, the C++ embedded expert system shell developed in the Systems Engineering and Automatic Control Laboratory (University of Girona) as a specific rule-based problem solving tool. It has been especially conceived for supporting cooperative expert systems, and uses the object-oriented programming paradigm.
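The descriptors-then-rules pipeline these two abstracts describe can be illustrated with a toy rule base: low-level routines produce named descriptors, and rules map descriptor combinations to scene labels. The rules and descriptor names below are invented for illustration and bear no relation to CEES's actual rule syntax.

```python
# Toy rule-based interpretation of image descriptors: each rule pairs
# a condition over the descriptor dictionary with a scene label.
# Descriptors and rules here are hypothetical examples.

RULES = [
    (lambda d: d["color"] == "gray" and d["texture"] == "smooth", "road"),
    (lambda d: d["color"] == "orange" and d["shape"] == "cone", "obstacle"),
    (lambda d: d["shape"] == "rectangular" and d["texture"] == "metallic",
     "container"),
]

def interpret(descriptors):
    """Fire the first rule whose condition matches; 'unknown' otherwise."""
    for condition, label in RULES:
        if condition(descriptors):
            return label
    return "unknown"

# Descriptors that low-level color/texture/shape modules might emit:
print(interpret({"color": "gray", "texture": "smooth", "shape": "flat"}))
```

A real cooperative expert system would let several such rule sets run concurrently and exchange partial conclusions, rather than scanning one flat list.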
Abstract:
We present a simple, generic model of annual tree growth, called "T". This model accepts input from a first-principles light-use efficiency model (the "P" model). The P model provides values for gross primary production (GPP) per unit of absorbed photosynthetically active radiation (PAR). Absorbed PAR is estimated from the current leaf area. GPP is allocated to foliage, transport tissue, and fine-root production and respiration in such a way as to satisfy well-understood dimensional and functional relationships. Our approach thereby integrates two modelling approaches separately developed in the global carbon-cycle and forest-science literature. The T model can represent both ontogenetic effects (the impact of ageing) and the effects of environmental variations and trends (climate and CO2) on growth. Driven by local climate records, the model was applied to simulate ring widths during the period 1958–2006 for multiple trees of Pinus koraiensis from the Changbai Mountains in northeastern China. Each tree was initialised at its actual diameter at the time when local climate records started. The model produces realistic simulations of the interannual variability in ring width for different age cohorts (young, mature, and old). Both the simulations and observations show a significant positive response of tree-ring width to growing-season total photosynthetically active radiation (PAR0) and the ratio of actual to potential evapotranspiration (α), and a significant negative response to mean annual temperature (MAT). The slopes of the simulated and observed relationships with PAR0 and α are similar; the negative response to MAT is underestimated by the model. Comparison of simulations with fixed and changing atmospheric CO2 concentration shows that CO2 fertilisation over the past 50 years is too small to be distinguished in the ring-width data, given ontogenetic trends and interannual variability in climate.
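The coupling described above, GPP as light-use efficiency times absorbed PAR, with absorbed PAR estimated from the current leaf area, can be sketched as follows. The Beer–Lambert extinction term and all constants below are illustrative assumptions, not the calibrated values of the P or T models.

```python
# Sketch of GPP = LUE * absorbed PAR, with absorbed PAR derived from
# leaf area via a Beer-Lambert canopy light-extinction term.
# lue and k are placeholder values, not the models' calibrations.
import math

def absorbed_par(par, leaf_area_index, k=0.5):
    """Absorbed PAR: incident PAR times the canopy-absorbed fraction
    1 - exp(-k * LAI) (Beer-Lambert extinction, coefficient k)."""
    return par * (1.0 - math.exp(-k * leaf_area_index))

def gpp(par, leaf_area_index, lue=1.8):
    """Gross primary production: light-use efficiency (here in
    g C per mol photons, illustrative) times absorbed PAR."""
    return lue * absorbed_par(par, leaf_area_index)

# e.g. growing-season PAR of 300 mol photons m^-2 at LAI 3 vs LAI 1:
print(gpp(300.0, 3.0), gpp(300.0, 1.0))
```

In the T model this GPP would then be allocated to foliage, transport tissue, and fine roots; a denser canopy (higher LAI) absorbs more light but with diminishing returns, which the exponential term captures.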
Abstract:
This paper argues that media and communications theory, as with cultural and creative industries analysis, can benefit from a deeper understanding of economic growth theory. Economic growth theory is elucidated in the context of both cultural and media studies and with respect to modern Chinese economic development. Economic growth is a complex evolutionary process that is tightly integrated with socio-cultural and political processes. This paper seeks to explore this mechanism and to advance cultural theory from an erstwhile political economy perspective to one centred on the co-evolutionary dynamics of economic and socio-political systems. A generic model is presented in which economic and social systems co-evolve through the origination, adoption and retention of new ideas, and in which the creative industries are a key part of this process. The paper concludes that digital media capabilities are a primary source of economic development.
Abstract:
Buffer overflow vulnerabilities continue to prevail and the sophistication of attacks targeting these vulnerabilities is continuously increasing. As a successful attack of this type has the potential to completely compromise the integrity of the targeted host, early detection is vital. This thesis examines generic approaches for detecting executable payload attacks, without prior knowledge of the implementation of the attack, in such a way that new and previously unseen attacks are detectable. Executable payloads are analysed in detail for attacks targeting the Linux and Windows operating systems executing on an Intel IA-32 architecture. The execution flow of attack payloads is analysed and a generic model of execution is examined. A novel classification scheme for executable attack payloads is presented which allows for characterisation of executable payloads and facilitates vulnerability and threat assessments, as well as intrusion detection capability assessments for intrusion detection systems. An intrusion detection capability assessment may be utilised to determine whether or not a deployed system is able to detect a specific attack, and to identify requirements for intrusion detection functionality for the development of new detection methods. Two novel detection methods are presented, capable of detecting new and previously unseen executable attack payloads. The detection methods are capable of identifying and enumerating the executable payload’s interactions with the operating system on the targeted host at the time of compromise. The detection methods are further validated using real-world data, including executable payload attacks.
Abstract:
Most current computer systems authorise the user at the start of a session and do not detect whether the current user is still the initial authorised user, a substitute user, or an intruder pretending to be a valid user. Therefore, a system that continuously checks the identity of the user throughout the session, without being intrusive to the end-user, is necessary. Such a system is called a continuous authentication system (CAS). Researchers have applied several approaches to CAS, and most of these techniques are based on biometrics. These continuous biometric authentication systems (CBAS) are supplied with user traits and characteristics. One of the main types of biometrics is keystroke dynamics, which has been widely tried and accepted for providing continuous user authentication. Keystroke dynamics is appealing for many reasons. First, it is less obtrusive, since users will be typing on the computer keyboard anyway. Second, it does not require extra hardware. Finally, keystroke dynamics will be available after the authentication step at the start of the computer session. Currently, there is insufficient research in the field of CBAS with keystroke dynamics. To date, most existing schemes ignore the continuous authentication scenarios, which might affect their practicality in different real-world applications. Also, contemporary CBAS with keystroke dynamics approaches use character sequences as features representative of user typing behavior, but their feature selection criteria do not guarantee features with strong statistical significance, which may cause less accurate statistical user representation. Furthermore, their selected features do not inherently incorporate user typing behavior. Finally, the existing CBAS based on keystroke dynamics are typically dependent on pre-defined user-typing models for continuous authentication.
This dependency restricts the systems to authenticating only known users whose typing samples have been modelled. This research addresses the previous limitations associated with existing CBAS schemes by developing a generic model to better identify and understand the characteristics and requirements of each type of CBAS and continuous authentication scenario. Also, the research proposes four statistical feature selection techniques that have the highest statistical significance and encompass different user typing behaviors, representing user typing patterns effectively. Finally, the research proposes a user-independent threshold approach that is able to authenticate a user accurately without needing any predefined user typing model a priori. We also enhance the technique to detect an impostor or intruder who may take over during the computer session.
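The raw material of keystroke dynamics is timing features such as dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). The sketch below computes these and compares two samples against a fixed threshold; it only illustrates the features, and the threshold value is arbitrary — the thesis's statistical feature selection and user-independent thresholding are far richer than this.

```python
# Minimal keystroke-dynamics sketch: dwell and flight times as features,
# with a mean-absolute-deviation comparison against a fixed threshold.
# Times are in milliseconds; the threshold of 50 ms is illustrative.

def features(events):
    """events: list of (key, press_time, release_time) tuples."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell + flight

def matches(sample, reference, threshold=50.0):
    """True if the mean absolute deviation between the two feature
    vectors is within the threshold."""
    diffs = [abs(s - r) for s, r in zip(sample, reference)]
    return sum(diffs) / len(diffs) <= threshold

ref = features([("p", 0, 90), ("a", 150, 230), ("s", 300, 385)])
new = features([("p", 0, 95), ("a", 160, 235), ("s", 310, 400)])
print(matches(new, ref))  # True: the two samples are closely spaced
```

A user-independent scheme, as proposed in the thesis, would replace the per-user `reference` vector with population-level statistics so that unseen users can still be authenticated.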
Abstract:
Non-rigid face alignment is a very important task in a large range of applications, but existing tracking-based non-rigid face alignment methods are either inaccurate or require a person-specific model. This dissertation has developed simultaneous alignment algorithms that overcome these constraints and provide alignment with high accuracy, efficiency, robustness to varying image conditions, and the requirement of only a generic model.