904 results for Process-dissociation Framework
Abstract:
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications; in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
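As a loose illustration of the idea (the bounded-stack operation, the `MAX` constant and the template names below are invented, not taken from the paper), a test template can be seen as a predicate that refines an operation's valid input space into a named partition of test cases:

```python
# Illustrative sketch: test templates as predicates that partition the valid
# input space of a bounded-stack "push" operation.
MAX = 3

def vis_push(stack, item):
    """Valid input space of the push operation."""
    return len(stack) <= MAX

# Each template refines the valid input space with a further predicate.
templates = {
    "push_ok":   lambda s, x: vis_push(s, x) and len(s) < MAX,
    "push_full": lambda s, x: vis_push(s, x) and len(s) == MAX,
}

def classify(stack, item):
    """Return the names of the templates a concrete test case satisfies."""
    return [name for name, pred in templates.items() if pred(stack, item)]

print(classify([1, 2], 9))      # ['push_ok']
print(classify([1, 2, 3], 9))   # ['push_full']
```

Because templates are ordinary predicates, they can themselves be refined into finer templates, which is the structuring step the framework formalizes.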
Abstract:
This paper develops a general framework for valuing a wide range of derivative securities. Rather than focusing on the stochastic process of the underlying security and developing an instantaneously-riskless hedge portfolio, we focus on the terminal distribution of the underlying security. This enables the derivative security to be valued as the weighted sum of a number of component pieces. The component pieces are simply the different payoffs that the security generates in different states of the world, and they are weighted by the probability of the particular state of the world occurring. A full set of derivations is provided. To illustrate its use, the valuation framework is applied to plain-vanilla call and put options, as well as a range of derivatives including caps, floors, collars, supershares, and digital options.
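The valuation recipe described above can be sketched directly: discount the payoff in each terminal state, weighted by that state's probability. The three-state terminal distribution, the strike of 100 and the probabilities below are invented for illustration:

```python
# Sketch of valuing a derivative as the probability-weighted sum of its
# terminal payoffs (numbers are illustrative, not from the paper).
def value(payoff, states, probs, discount=1.0):
    """Discounted expected payoff over a discrete terminal distribution."""
    return discount * sum(p * payoff(s) for s, p in zip(states, probs))

states = [80.0, 100.0, 120.0]          # terminal prices of the underlying
probs  = [0.25, 0.50, 0.25]            # risk-neutral state probabilities

call = lambda s: max(s - 100.0, 0.0)            # plain-vanilla call, strike 100
put  = lambda s: max(100.0 - s, 0.0)            # plain-vanilla put, strike 100
digital = lambda s: 1.0 if s > 100.0 else 0.0   # digital (binary) option

print(value(call, states, probs))      # 5.0
print(value(put, states, probs))       # 5.0
print(value(digital, states, probs))   # 0.25
```

Caps, floors, collars and supershares fit the same pattern: only the `payoff` function changes, which is exactly the modularity the component-piece view buys.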
Abstract:
Information processing accounts propose that autonomic orienting reflects the amount of resources allocated to process a stimulus. However, secondary task reaction time (RT), a supposed measure of processing resources, has shown a dissociation from autonomic orienting. The present study tested the hypothesis that secondary task RT reflects a serial processing mechanism. Participants (N = 24) were presented with circle and ellipse shapes and asked to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of a second shape (task-irrelevant). Concurrent with the counting task, participants performed a secondary RT task to an auditory probe presented at either a high or low intensity and at two different probe positions following shape onset (50 and 300 ms). Electrodermal orienting was larger during task-relevant shapes than during task-irrelevant shapes, but secondary task RT to the high-intensity probe was slower during the latter. In addition, an underadditive interaction between probe stimulus intensity and probe position was found in secondary RT. The findings are consistent with a serial processing model of secondary RT and suggest that the notion of processing stages should be incorporated into current information-processing models of autonomic orienting.
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, these methods can be applied across any field, irrespective of the type of response.
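A minimal sketch of the two-stage idea, on synthetic data and with an invented screening threshold (the paper's actual tools are decision trees and generalized additive models; simple correlation screening and least squares stand in for them here):

```python
import numpy as np

# Stage 1: exploratory screening of candidate predictors.
# Stage 2: parsimonious parametric predictive model on the retained ones.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                    # 4 candidate predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

# Stage 1: keep predictors with non-trivial absolute correlation (invented cutoff).
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
keep = [j for j, s in enumerate(scores) if s > 0.3]

# Stage 2: fit a least-squares predictive model on the retained variables.
beta, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
print(keep)    # indices of the retained predictors
print(beta)    # fitted coefficients, close to the generating values
```

The point of the two stages is separation of concerns: the screening step decides *which* variables matter, so that the final predictive model stays parsimonious and interpretable.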
Abstract:
This paper describes a process-based metapopulation dynamics and phenology model of prickly acacia, Acacia nilotica, an invasive alien species in Australia. The model, SPAnDX, describes the interactions between riparian and upland sub-populations of A. nilotica within livestock paddocks, including the effects of extrinsic factors such as temperature, soil moisture availability and atmospheric concentrations of carbon dioxide. The model includes the effects of management events such as changing the livestock species or stocking rate, applying fire, and herbicide application. The predicted population behaviour of A. nilotica was sensitive to climate. Using 35-year daily weather datasets for five representative sites spanning the range of conditions in which A. nilotica is found in Australia, the model predicted biomass levels that closely accord with expected values at each site. SPAnDX can be used as a decision-support tool in integrated weed management, and to explore the sensitivity of cultural management practices to climate change throughout the range of A. nilotica. The cohort-based DYMEX modelling package used to build and run SPAnDX provided several advantages over more traditional population modelling approaches (e.g. an appropriate specific formalism (discrete time, cohort-based, process-oriented), user-friendly graphical environment, extensible library of reusable components, and useful and flexible input/output support framework).
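The discrete-time, cohort-based, process-oriented formalism mentioned above can be illustrated with a toy two-stage model; the life stages and all rates below are invented for illustration and are not SPAnDX parameters:

```python
# Toy discrete-time cohort update in the spirit of DYMEX-style models:
# each time step, juveniles survive and may mature; adults survive and reproduce.
def step(juveniles, adults, surv=0.5, mat=0.25, fec=2.0):
    """Advance a two-cohort population one time step."""
    j_next = adults * fec + juveniles * surv * (1 - mat)   # births + non-maturing survivors
    a_next = adults * surv + juveniles * surv * mat        # surviving adults + matured juveniles
    return j_next, a_next

j, a = 100.0, 10.0
j, a = step(j, a)
print(j, a)   # 57.5 17.5
```

In a full model the rates would be driven by extrinsic processes (temperature, soil moisture, management events) rather than being constants, which is what makes the approach process-based.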
Abstract:
Over the last decade, software architecture emerged as a critical design step in Software Engineering. This encompassed a shift from traditional programming towards the deployment and assembly of independent components. The specification of the overall system structure, on the one hand, and of the interaction patterns between its components, on the other, became a major concern for the working developer. Although a number of formalisms are available to express behaviour and supply the indispensable calculational power to reason about designs, the task of deriving architectural designs on top of popular component platforms has remained largely informal. This paper introduces a systematic approach to derive, from behavioural specifications written in CCS, the corresponding architectural skeletons in the Microsoft .NET framework in the form of executable C# code. This prototyping process is automated by means of a specific tool developed in Haskell.
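A toy sketch of the derivation idea (far simpler than the paper's CCS-to-C# translation, and with an invented mapping): walk a prefix-style process term and emit one method stub per action, yielding an architectural skeleton the developer then fills in:

```python
# Toy mapping from a CCS-like prefix term to a C#-style class skeleton.
from dataclasses import dataclass

@dataclass
class Prefix:          # a.P — perform action a, then behave as P
    action: str
    cont: object

@dataclass
class Nil:             # 0 — the inactive process
    pass

def skeleton(term, cls="Buffer"):
    """Emit a C#-like class stub with one method per action in the term."""
    actions = []
    while isinstance(term, Prefix):
        actions.append(term.action)
        term = term.cont
    body = "\n".join(f"    public void {a}() {{ /* TODO */ }}" for a in actions)
    return f"class {cls} {{\n{body}\n}}"

spec = Prefix("put", Prefix("get", Nil()))   # the one-place buffer put.get.0
print(skeleton(spec))
```

The real translation must also handle choice, parallel composition and restriction; the sketch only shows the skeleton-emission step.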
Abstract:
New Public Management (NPM) created great pressure to introduce and adapt businesslike accounting in the public sector (Hood, 1995; Lapsley, 2008; Lapsley et al., 2009), especially the transition from cash-basis to accrual-based accounting. As a consequence, over the last 20 years we have witnessed a movement towards internationally standardized public sector accounting that led to the publication of 32 International Public Sector Accounting Standards (IPSAS) for all public sector entities, from national central governments to local governments (IFAC, 2008). These standards are accrual-based and emphasize the balance sheet approach, fair value measurement and the revenue-expense approach (Hints, 2007). The main innovations are associated with the use of the balance sheet approach and fair value measurement because, traditionally, public accounting systems have focused mainly on the revenue-expense approach and on historical cost valuation (Oulasvirta, 2014).
Abstract:
Purpose/objectives: This paper investigates whether the performance management (PM) framework adopted in Portuguese local government (PLG) fits Otley's (1999) PM framework. In particular, the research questions are (1) whether the PM framework adopted in PLG (SIADAP) fits Otley's framework, and (2) how local politicians (aldermen) see the operation of performance management systems (PMS) in PLG (focusing on the goal-setting process and on incentive and reward structures). Theoretical positioning/contributions: With this paper we intend to contribute to the literature on how Otley's PM framework can guide empirical research on the operation of PMS. In particular, the paper contributes to understanding the fit between the PMS implemented in PLG and Otley's PM framework. The analysis of this fit can help clarify whether PMS are used in PLG as a management tool or as a strategic response to external pressures (based on interviews conducted with aldermen). We believe that Otley's PM framework, as well as the extended PM framework presented by Ferreira and Otley (2009), can provide a useful research tool to understand the operation of PMS in PLG. Research method: The first research question is the central issue in this paper and is analyzed on the basis of the main reforms introduced by the Portuguese government to the PM of public organizations (such as municipalities). On the other hand, interviews conducted in three larger Portuguese municipalities (Oporto, Braga, and Matosinhos) show how aldermen see the operation of PMS in PLG, highlighting the goal-setting process with associated targets and the existence of incentive and reward structures linked with performance. Findings: Generally, we find that the formal and regulated PM frameworks in PLG fit the main issues of Otley's PM framework.
However, regarding the aldermen's perceptions of PMS in practice, we find a gap between theory and practice, especially difficulties associated with the lack of a culture of goal and target setting and the lack of incentive and reward structures linked with performance.
USE AND CONSEQUENCES OF PARTICIPATORY GIS IN A MEXICAN MUNICIPALITY: APPLYING A MULTILEVEL FRAMEWORK
Abstract:
This paper seeks to understand the use and the consequences of a Participatory Geographic Information System (PGIS) in a Mexican local community. A multilevel framework was applied, mainly influenced by two theoretical lenses (a structurationist view and the social shaping of technology) and structured in three dimensions (context, process and content) according to a contextualist logic. The results of our study bring two main contributions. The first is the refinement of the theoretical framework in order to better investigate the implementation and use of Information and Communication Technology (ICT) artifacts by local communities for social and environmental purposes. The second contribution is the extension of the existing IS (Information Systems) literature on participatory practices through the identification of important conditions for helping the mobilization of ICT as a tool for empowering local communities.
Abstract:
This paper proposes a novel framework for modelling the Value for the Customer, the so-called Conceptual Model for Decomposing Value for the Customer (CMDVC). This conceptual model is first validated through an exploratory case study where the authors validate both the proposed constructs of the model and their relations. In a second step the authors propose a mathematical formulation for the CMDVC as well as a computational method. This has enabled the final quantitative discussion of how the CMDVC can be applied and used in the enterprise environment, and the final validation by the people in the enterprise. Throughout this research, we were able to confirm that the results of this novel quantitative approach to modelling the Value for the Customer are consistent with the company's empirical experience. The paper further discusses the merits and limitations of this approach, proposing that the model is likely to bring value to support not only contract preparation at an Ex-Ante Negotiation Phase, as demonstrated, but also the actual negotiation process, as finally confirmed by an enterprise testimonial.
Abstract:
Processes are a central entity in enterprise collaboration. Collaborative processes need to be executed and coordinated on a distributed computational platform where computers are connected through heterogeneous networks and systems. Life cycle management of such collaborative processes requires a framework able to handle their diversity based on different computational and communication requirements. This paper proposes a rationale for such a framework, points out key requirements and proposes a strategy for a supporting technological infrastructure. Beyond the portability of collaborative process definitions among different technological bindings, a framework to handle the different life cycle phases of those definitions is presented and discussed.
Abstract:
E-learning frameworks are conceptual tools to organize networks of e-learning services. Most frameworks cover areas that go beyond the scope of e-learning, from course to financial management, and neglect the typical everyday activities of teachers and students at schools, such as the creation, delivery, resolution and evaluation of assignments. This paper presents the Ensemble framework, an e-learning framework exclusively focused on the teaching-learning process through the coordination of pedagogical services. The framework presents an abstract data, integration and evaluation model based on content and communications specifications. These specifications underpin the implementation of networks in specialized domains with complex evaluations. In this paper we specialize the framework for two domains with complex evaluation: computer programming and computer-aided design (CAD). For each domain we highlight two Ensemble hotspots: data and evaluation procedures. In the former we formally describe the exercise and present possible extensions. In the latter, we describe the automatic evaluation procedures.
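A minimal sketch of an automatic evaluation procedure for the computer programming domain (the exercise record and the test-case format below are invented for illustration, not Ensemble's actual data model):

```python
# Toy automatic evaluation: run a submitted function against stored test cases.
exercise = {
    "description": "Return the factorial of n.",
    "tests": [(0, 1), (1, 1), (5, 120)],   # (input, expected output) pairs
}

def evaluate(submission, tests):
    """Return (passed, total) for a submitted function."""
    passed = sum(1 for arg, expected in tests if submission(arg) == expected)
    return passed, len(tests)

def student_answer(n):
    return 1 if n <= 1 else n * student_answer(n - 1)

print(evaluate(student_answer, exercise["tests"]))  # (3, 3)
```

A real evaluator would also sandbox the submission and report per-test feedback, but the core loop is this comparison of actual against expected outputs.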
Abstract:
In this paper we present a framework for managing QoS-aware applications in a dynamic, ad-hoc, distributed environment. This framework considers an available set of wireless/mobile and fixed nodes, which may temporarily form groups in order to process a set of related services, and where there is the need to support different levels of service and different combinations of quality requirements. This framework is being developed both for testing and validating an approach, based on multidimensional QoS properties, which provides service negotiation and proposal evaluation algorithms, and for assessing the suitability of the Ada language for use in the context of dynamic, QoS-aware systems.
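Proposal evaluation over multidimensional QoS properties can be sketched as follows; the dimensions, thresholds and scoring rule below are invented for illustration and are not the paper's algorithms:

```python
# Toy multidimensional QoS proposal evaluation: reject proposals that violate
# any dimension's threshold, otherwise score them by their summed margins.
requirements = {
    "latency_ms": (50, -1.0),   # (threshold, direction): -1.0 = lower is better
    "fps":        (25, +1.0),   # +1.0 = higher is better
}

def score(proposal, reqs):
    """Return None if any threshold is violated, else the total margin."""
    total = 0.0
    for dim, (limit, direction) in reqs.items():
        margin = direction * (proposal[dim] - limit)
        if margin < 0:
            return None                  # proposal violates this dimension
        total += margin
    return total

proposals = [
    {"latency_ms": 40, "fps": 30},       # meets both requirements
    {"latency_ms": 60, "fps": 60},       # too slow: rejected outright
]
ranked = [(score(p, requirements), p) for p in proposals]
best = max((r for r in ranked if r[0] is not None), key=lambda r: r[0])
print(best[1])   # {'latency_ms': 40, 'fps': 30}
```

During negotiation, a node group would exchange such proposals and pick the highest-scoring feasible one, which is the role the evaluation algorithm plays in the framework.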
Abstract:
Foresight and scenario-building methods can be an interesting reference for the social sciences, especially in terms of innovative methods for labour process analysis. A scenario, as a central concept of prospective analysis, can be considered a rich and detailed portrait of a plausible future world. It can be a useful tool for policy-makers to grasp problems clearly and comprehensively, and to better pinpoint challenges as well as opportunities in an overall framework. Features of foresight methods are being used in some labour policy-making experiences. Case studies developed in Portugal will be presented, and some conclusions will be drawn in order to organise a set of principles for foresight analysis applied to the European project WORKS, on work organisation restructuring in the knowledge society and on work design methods for new management structures of virtual organisations.
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives to develop scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the specification of workflow tasks, decentralizing the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs) without the concept of iterations, where activities are executed over millions of iterations during long periods of time and dynamic workflow reconfigurations are needed after a certain iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g. on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and to the Amazon Elastic Compute Cloud (EC2).
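The data-driven coordination of autonomous activities through a shared tuple space can be sketched in-process (a toy stand-in for the mechanism, not AWARD's actual API or its distributed implementation):

```python
# Toy tuple-space coordination: two autonomous activities exchange tuples
# through a shared space instead of being driven by a central engine.
import queue
import threading

tuple_space = queue.Queue()    # shared space: activities put/take tagged tuples

def activity_a(n):
    """Producer activity: publishes result tuples, then a termination tuple."""
    for i in range(n):
        tuple_space.put(("token", i * i))
    tuple_space.put(("done", None))

def activity_b(results):
    """Consumer activity: blocks on the space until tuples arrive."""
    while True:
        tag, value = tuple_space.get()
        if tag == "done":
            break
        results.append(value)

results = []
b = threading.Thread(target=activity_b, args=(results,))
b.start()
activity_a(4)
b.join()
print(results)   # [0, 1, 4, 9]
```

Because neither activity references the other directly, either one can be replaced or reconfigured at runtime as long as it honours the tuple format, which is the property that enables dynamic workflow reconfiguration.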