831 results for Formal Methods. Component-Based Development. Competition. Model Checking


Relevance:

100.00%

Publisher:

Abstract:

Service researchers have repeatedly claimed that firms should acquire customer information in order to develop services that fit customer needs. Despite this, studies concentrating on the actual use of customer information in service development are lacking. The present study fills this research gap by investigating information use during a service development process. It demonstrates that use is not a straightforward task that automatically follows the acquisition of customer information; in fact, four of the six identified types of use represent non-use of customer information. Hence, the study demonstrates that the acquisition of customer information does not guarantee that the information will actually be used in development. The study used an ethnographic approach and was consequently conducted in the field, in real time, over an extensive period of 13 months. Participant observation allowed direct access to the investigated phenomenon: the different types of use by the observed development project members were captured as they emerged. In addition, interviews, informal discussions and internal documents were used to gather data. The development process of a bank's website constituted the empirical context of the investigation. This ethnography brings novel insights to both academia and practice. It critically questions the traditional focus on the firm's acquisition of customer information and suggests that this focus ought to be expanded to the actual use of customer information: what is the point of acquiring costly customer information if it is not used in development? Based on the findings of this study, a holistic view of customer information, "information in use", is generated. This view extends the traditional view of customer information in three ways: the source, timing and form of data collection. First, the study showed that customer information can come explicitly from the customer, arise from speculation among the developers, or already exist implicitly. Prior research has mainly focused on the customer as the information provider and the explicit source to turn to for information. Second, the study identified that the used and non-used customer information was acquired previously, currently (within the time frame of the focal development process), and potentially in the future. Prior research has primarily focused on customer information acquired within the time frame of the development process. Third, the used and non-used customer information was both formally and informally acquired. Prior research has suggested a large number of sophisticated formal methods for acquiring customer information to be used in development. By focusing on "information in use", new knowledge on the types of customer information that are actually used was generated. For example, the findings show that formal customer information acquired during the development process is used less than customer information already existing within the firm. With this knowledge at hand, better methods to capture this more usable customer information can be developed. Moreover, the thesis suggests that by focusing more strongly on the use of customer information, service development processes can be restructured around the information that is actually used.

Relevance:

100.00%

Publisher:

Abstract:

The recent spurt of research activity in the Entity-Relationship approach to databases calls for close scrutiny of the semantics of the underlying Entity-Relationship models, data manipulation languages, data definition languages, etc. For reasons well known, it is very desirable, and sometimes imperative, to give a formal description of the semantics. In this paper, we consider a specific ER model, the generalized Entity-Relationship model (without attributes on relationships), and give denotational semantics for the model as well as for a simple ER algebra based on the model. Our formalism is based on the Vienna Development Method meta-language (VDM). We also discuss the salient features of the given semantics in detail and suggest directions for further work.
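To make the denotational idea concrete, here is a minimal Python sketch (not the paper's VDM formalism): entity sets denote finite sets of entity values, a relationship without attributes denotes a set of tuples over the participating entity sets, and algebra operators denote set-theoretic functions. All names are illustrative.

```python
# Denotation of two entity sets
employees = {"alice", "bob", "carol"}
projects = {"p1", "p2"}

# A relationship (without attributes) denotes a subset of the
# cartesian product of the participating entity sets
works_on = {("alice", "p1"), ("bob", "p1"), ("carol", "p2")}
assert works_on <= {(e, p) for e in employees for p in projects}

def select(rel, pred):
    """Selection operator: denotes the subset satisfying pred."""
    return {t for t in rel if pred(t)}

def project_component(rel, i):
    """Projection onto the i-th participating entity set."""
    return {t[i] for t in rel}

# Who works on project p1?
print(project_component(select(works_on, lambda t: t[1] == "p1"), 0))
# {'alice', 'bob'}
```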

Relevance:

100.00%

Publisher:

Abstract:

Denial-of-service (DoS) attacks form a very important category of security threats prevalent in MIPv6 (Mobile Internet Protocol version 6) today. Many schemes have been proposed to alleviate such threats, including one of our own [9]. However, reasoning about the correctness of such protocols is not trivial. In addition, new solutions to mitigate attacks may need to be deployed in the network frequently, as and when attacks are detected, since it is practically impossible to anticipate all attacks and provide solutions in advance. This makes it necessary to validate the solutions in a timely manner before deployment in the real network. However, the threshold schemes needed in group protocols make analysis complex. Model checking of threshold-based group protocols that employ cryptography has not been successful so far. Here, we propose a new simulation-based approach for validation using a tool called FRAMOGR that supports executable specification of group protocols that use cryptography. FRAMOGR allows one to specify attackers and track probability distributions of values or paths. We believe that infrastructure such as FRAMOGR will be required in future for validating new group-based threshold protocols that may be needed to make MIPv6 more robust.
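As background on the threshold schemes that make such protocols hard to analyse, the following is a minimal k-of-n secret-sharing sketch (Shamir's scheme over a prime field). It is a generic illustration only; it is not taken from FRAMOGR or the paper.

```python
# Minimal k-of-n threshold (Shamir) secret sharing over a prime field,
# illustrating the kind of threshold scheme used in group protocols.
import random

P = 2_147_483_647  # a Mersenne prime, used as the field modulus

def make_shares(secret, k, n):
    """Split secret into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 suffice
```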

Relevance:

100.00%

Publisher:

Abstract:

Purpose - In the present work, a numerical method, based on the well-established enthalpy technique, is developed to simulate the growth of binary alloy equiaxed dendrites in the presence of melt convection. The paper aims to discuss these issues.
Design/methodology/approach - The principle of volume averaging is used to formulate the governing equations (mass, momentum, energy and species conservation), which are solved using a coupled explicit-implicit method. The velocity and pressure fields are obtained using a fully implicit finite volume approach, whereas the energy and species conservation equations are solved explicitly to obtain the enthalpy and solute concentration fields. As a model problem, the growth of a single crystal in a two-dimensional cavity filled with an undercooled melt is simulated.
Findings - Comparison of the simulation results with available solutions obtained using the level set method and the phase field method shows good agreement. The effects of melt flow on dendrite growth rate and solute distribution along the solid-liquid interface are studied. A faster growth rate of the upstream dendrite arm is observed for binary alloys, which can be attributed to the enhanced heat transfer due to convection as well as the lower solute pile-up at the solid-liquid interface. Subsequently, the influence of the thermal and solutal Peclet numbers and of undercooling on the dendrite tip velocity is investigated.
Originality/value - As the present enthalpy-based microscopic solidification model with melt convection shares a framework with the enthalpy models popularly used at the macroscopic scale, it lays the foundation for developing effective multiscale solidification models.
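To illustrate the core of the enthalpy technique, here is a minimal sketch reduced to a 1D, single-component solidification problem with an explicit energy update; the paper's 2D binary-alloy model with convection is far richer, and all property values below are illustrative.

```python
# Explicit enthalpy-method update for a 1D solidification (Stefan)
# problem: march the volumetric enthalpy H, then recover temperature
# and liquid fraction cell by cell. Illustrative properties only.
import numpy as np

nx, dx, dt, nsteps = 100, 1e-4, 1e-3, 5000
rho, cp, kth, L = 1000.0, 1000.0, 1.0, 3.0e5   # density, heat cap., conductivity, latent heat
Tm = 0.0                                        # melting temperature

T = np.full(nx, 5.0)        # liquid initially 5 K above melting
fl = np.ones(nx)            # liquid fraction
T[0] = -5.0                 # chilled wall drives solidification
H = rho * cp * T + rho * L * fl                  # volumetric enthalpy
Hs, Hl = rho * cp * Tm, rho * cp * Tm + rho * L  # solidus/liquidus enthalpies

for _ in range(nsteps):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    H[1:-1] += dt * kth * lap    # explicit energy update (no convection here)
    # recover T and liquid fraction from enthalpy
    solid, liquid = H <= Hs, H >= Hl
    mushy = ~solid & ~liquid
    T[solid] = H[solid] / (rho * cp)
    T[liquid] = (H[liquid] - rho * L) / (rho * cp)
    T[mushy] = Tm
    fl[solid], fl[liquid], fl[mushy] = 0.0, 1.0, (H[mushy] - Hs) / (rho * L)
    T[0], T[-1] = -5.0, 5.0      # Dirichlet boundaries

print("solidified cells:", int((fl < 0.5).sum()))
```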

Relevance:

100.00%

Publisher:

Abstract:

FreeRTOS is an open-source real-time microkernel that has a wide community of users. We present the formal specification of the behaviour of the task part of FreeRTOS that deals with the creation, management, and scheduling of tasks using priority-based preemption. Our model is written in the Z notation, and we verify its consistency using the Z/Eves theorem prover. This includes a precise statement of the preconditions for all API commands. This task model forms the basis for three dimensions of further work: (a) the modelling of the rest of the behaviour of queues, time, mutex, and interrupts in FreeRTOS; (b) refinement of the models to code to produce a verified implementation; and (c) extension of the behaviour of FreeRTOS to multi-core architectures. We propose all three dimensions as benchmark challenge problems for Hoare's Verified Software Initiative.
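As a flavour of the state-and-precondition modelling that the Z specification performs, here is a minimal executable analogue in Python; it is not the paper's Z model, and the API and priority range below are illustrative, far simpler than FreeRTOS's.

```python
# A minimal executable analogue of state-and-precondition modelling:
# a task set with priorities, where the running task is always a
# highest-priority ready task (priority-based preemption).

class SchedulerModel:
    MAX_PRIORITY = 7  # illustrative bound, not FreeRTOS's configMAX_PRIORITIES

    def __init__(self):
        self.ready = {}      # task name -> priority
        self.running = None  # name of the running task, if any

    def create_task(self, name, priority):
        # Preconditions, stated explicitly as a Z model would:
        # the name must be fresh and the priority in range.
        assert name not in self.ready, "task already exists"
        assert 0 <= priority <= self.MAX_PRIORITY, "priority out of range"
        self.ready[name] = priority
        self._reschedule()

    def delete_task(self, name):
        assert name in self.ready, "no such task"
        del self.ready[name]
        self._reschedule()

    def _reschedule(self):
        # Invariant maintained after every operation.
        self.running = max(self.ready, key=self.ready.get) if self.ready else None

s = SchedulerModel()
s.create_task("idle", 0)
s.create_task("sensor", 3)
assert s.running == "sensor"   # the higher-priority task preempts
```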

Relevance:

100.00%

Publisher:

Abstract:

Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in their solutions. In this thesis, we propose efficient numerical methods, based on model reduction techniques, for both deterministic and stochastic PDEs.

For deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth and can be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation, plus a correction term, is close to the original multiscale solution.

For stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio between the smallest and largest scales is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For more challenging problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
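For the multilevel Monte Carlo component, the sketch below shows the generic telescoping-sum estimator on a toy quantity of interest; the thesis applies the same idea to the effective multiscale equations, and everything here is an illustrative stand-in.

```python
# Generic multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
# Toy quantity of interest; not the thesis's multiscale PDE solver.
import numpy as np

rng = np.random.default_rng(0)

def qoi(sample, level):
    """Level-l approximation of a quantity of interest: trapezoidal
    quadrature of exp(x) on [0, sample] with 2**level subintervals."""
    x = np.linspace(0.0, sample, 2 ** level + 1)
    f = np.exp(x)
    return ((f[:-1] + f[1:]) / 2 * np.diff(x)).sum()

def mlmc(levels, samples_per_level):
    """Sum of level corrections; the SAME random sample is used on both
    levels of each correction, which keeps the corrections cheap."""
    est = 0.0
    for l, n_samples in zip(levels, samples_per_level):
        draws = rng.uniform(0.5, 1.5, n_samples)   # random input
        if l == levels[0]:
            corr = [qoi(w, l) for w in draws]                  # base level
        else:
            corr = [qoi(w, l) - qoi(w, l - 1) for w in draws]  # correction
        est += np.mean(corr)
    return est

# Many cheap coarse samples, few expensive fine ones
print(mlmc(levels=[2, 3, 4, 5], samples_per_level=[4000, 1000, 250, 60]))
```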

For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the reduction in computational cost in the numerical examples.

Relevance:

100.00%

Publisher:

Abstract:

Recent developments in modeling driver steering control with preview are reviewed. While some validation with experimental data has been presented, the rigorous application of formal system identification methods has not yet been attempted. This paper describes a steering controller based on linear model-predictive control. An indirect identification method that minimizes steering angle prediction error is developed. Special attention is given to filtering the prediction error so as to avoid identification bias that arises from the closed-loop operation of the driver-vehicle system. The identification procedure is applied to data collected from 14 test drivers performing double lane change maneuvers in an instrumented vehicle. It is found that the identification procedure successfully finds parameter values for the model that give small prediction errors. The procedure is also able to distinguish between the different steering strategies adopted by the test drivers. © 2006 IEEE.
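A minimal sketch of the identification idea, assuming a simple linear preview law rather than the paper's model-predictive controller: gains are fitted by least squares on the steering-angle prediction error, with the same prefilter applied to regressors and output as a stand-in for the bias-avoiding filtering the paper discusses. All signals and values are synthetic and illustrative.

```python
# Indirect identification by minimizing steering-angle prediction error.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical logged signals: lateral, heading and preview errors
e_lat = rng.standard_normal(n)
e_head = rng.standard_normal(n)
e_prev = rng.standard_normal(n)
true_gains = np.array([0.8, 1.5, 0.4])
delta = np.column_stack([e_lat, e_head, e_prev]) @ true_gains
delta += 0.05 * rng.standard_normal(n)          # measurement noise

def prefilter(x, a=0.7):
    """Simple high-pass prefilter; applying the SAME linear filter to
    regressors and output leaves a linear least-squares fit consistent."""
    y = np.empty_like(x)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = x[k] - a * x[k - 1]
    return y

Phi = np.column_stack([prefilter(e_lat), prefilter(e_head), prefilter(e_prev)])
y = prefilter(delta)
gains, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("identified gains:", gains)   # close to [0.8, 1.5, 0.4]
```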

Relevance:

100.00%

Publisher:

Abstract:

The diversity of non-domestic buildings at the urban scale poses a number of difficulties for developing models for large-scale analysis of the stock. This research proposes a probabilistic, engineering-based, bottom-up model to address these issues. In a recent study we classified London's non-domestic buildings based on the service they provide, such as offices, retail premises, and schools, and proposed the creation of one probabilistic representational model per building type. This paper investigates techniques for the development of such models. The representational model is a statistical surrogate of a dynamic energy simulation (ES) model. We first identify the main parameters affecting energy consumption in a particular building sector/type using sampling-based global sensitivity analysis methods, and then generate statistical surrogate models of the dynamic ES model over the dominant model parameters. Given a sample of actual energy consumption for that sector, we use the surrogate model to infer the distribution of model parameters by inverse analysis. The inferred distributions of input parameters can quantify the relative benefits of alternative energy-saving measures across an entire building sector, with the requisite quantification of uncertainties. Secondary school buildings are used to illustrate the application of this probabilistic method. © 2012 Elsevier B.V. All rights reserved.
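A toy version of this workflow, with a hypothetical two-parameter "simulator" standing in for the dynamic ES model: sample the dominant parameters, fit a cheap surrogate, then infer parameter distributions by ABC-style rejection against observed consumption. All names and numbers are illustrative.

```python
# Surrogate-plus-inverse-analysis workflow, toy scale.
import numpy as np

rng = np.random.default_rng(2)

def energy_sim(u_value, ach):
    """Stand-in for a dynamic energy simulation (kWh/m2/yr)."""
    return 40 + 55 * u_value + 12 * ach + rng.normal(0, 1)

# 1) sample the dominant parameters and run the "expensive" model
u = rng.uniform(0.5, 3.0, 200)       # wall U-value, W/m2K
a = rng.uniform(0.2, 2.0, 200)       # air changes per hour
y = np.array([energy_sim(ui, ai) for ui, ai in zip(u, a)])

# 2) fit a linear surrogate (a real study would first use global
#    sensitivity analysis to confirm these parameters dominate)
X = np.column_stack([np.ones_like(u), u, a])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
surrogate = lambda uu, aa: beta[0] + beta[1] * uu + beta[2] * aa

# 3) inverse analysis: keep parameter draws whose surrogate output is
#    close to a measured consumption sample (ABC-style rejection)
measured = rng.normal(140, 10, 1000)             # "observed" stock data
uu = rng.uniform(0.5, 3.0, 50_000)
aa = rng.uniform(0.2, 2.0, 50_000)
pred = surrogate(uu, aa)
target = rng.choice(measured, size=pred.shape)
keep = np.abs(pred - target) < 2.0
print("inferred mean U-value:", uu[keep].mean())
```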

Relevance:

100.00%

Publisher:

Abstract:

Dynamism and uncertainty are real challenges for present-day manufacturing enterprises (MEs). Reasons include an increasing demand for customisation, reduced time to market, shortened product life cycles and globalisation. MEs can reduce competitive pressure by becoming reconfigurable and change-capable. However, modern manufacturing philosophies, including agile and lean, must complement the application of reconfigurable manufacturing paradigms. Choosing and applying the best philosophies and techniques is very difficult, as most MEs deploy complex and unique configurations of processes and resource systems, and seek economies of scope and scale in respect of changing and distinctive product flows. It follows that systematic methods of achieving model-driven reconfiguration and interoperation of component-based manufacturing systems are required to design, engineer and change future MEs. This thesis, titled "Enhanced Integrated Modelling Approach to Reconfiguring Manufacturing Enterprises", introduces the development and prototyping of a model-driven environment for the design, engineering, optimisation and control of the reconfiguration of MEs, with an embedded capability to handle various types of change. The thesis describes a novel systematic approach, namely the enhanced integrated modelling approach (EIMA), in which coherent sets of integrated models are created that facilitate the engineering of MEs, especially their production planning and control (PPC) systems. The developed environment supports the engineering of common types of strategic, tactical and operational processes found in many MEs. The EIMA is centred on the ISO-standardised CIMOSA process modelling approach. Early study led to the development of simulation models, during which various CIMOSA shortcomings were observed, especially in its support for aspects of ME dynamism; this raised the need to structure and create semantically enriched models, hence forming an enhanced integrated modelling environment.

The thesis also presents three industrial case examples: (1) Ford Motor Company; (2) Bradgate Furniture Manufacturing Company; and (3) ACM Bearings Company. In order to understand the system prior to the realisation of any PPC strategy, multiple process segments of any target organisation need to be modelled. Coherent multi-perspective case study models are presented that have facilitated process re-engineering and associated resource system configuration. Such models have the capability to enable PPC decision-making processes in support of the reconfiguration of MEs. During these case studies, the capabilities of a number of software tools were exploited, such as Arena®, Simul8®, Plant Simulation®, MS Visio® and MS Excel®. Case study results demonstrated the effectiveness of the concepts related to the EIMA.

The research has resulted in new contributions to knowledge, in terms of new understandings, concepts and methods, in the following ways: (1) a structured, model-driven, integrated approach to the design, optimisation and control of the future reconfiguration of MEs, where the EIMA is an enriched and generic process modelling approach with the capability to represent both static and dynamic aspects of an ME; (2) example application cases showing benefits in terms of reduction in lead time, cost and resource load, and in terms of improved responsiveness of processes and resource systems, with a special focus on PPC; (3) identification and industrial application of a new key performance indicator (KPI), known as P3C, the measuring and monitoring of which can aid in enhancing the reconfigurability and responsiveness of MEs; and (4) an enriched modelling concept framework (E-MUNE) to capture requirements of static and dynamic aspects of MEs, where the conceptual framework has the capability to be extended and modified according to requirements. The thesis concludes by outlining key areas for future research into integrated modelling approaches, and into interoperation and updating mechanisms for partial models in support of the reconfiguration of MEs.

Relevance:

100.00%

Publisher:

Abstract:

Software-based control of life-critical embedded systems has become increasingly complex, and to a large extent has come to determine the safety of the human being. For example, implantable cardiac pacemakers have over 80,000 lines of code which are responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) automatically generated into modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation and testing process for complex software-controlled embedded systems. © 2014 ACM.
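As a drastically simplified, hypothetical illustration of rule-based model conversion (not the actual UPP2SF rule set), the sketch below maps a timed-automaton edge record to a Stateflow-style transition label:

```python
# Hypothetical model-to-model translation rule: a timed-automaton edge
# becomes a Stateflow-style transition label "[guard]{action}".
# Illustration only; NOT the UPP2SF rules.
from dataclasses import dataclass

@dataclass
class TAEdge:
    source: str
    target: str
    guard: str      # e.g. a clock constraint such as "t >= LRI"
    update: str     # e.g. a clock reset such as "t = 0"

def to_stateflow_transition(edge: TAEdge) -> str:
    label = ""
    if edge.guard:
        label += f"[{edge.guard}]"
    if edge.update:
        label += f"{{{edge.update};}}"
    return f"{edge.source} -> {edge.target}: {label}"

# Pacemaker-flavoured example: pace when the lower-rate-interval timer expires
edge = TAEdge("WaitLRI", "Pace", guard="t >= LRI", update="t = 0")
print(to_stateflow_transition(edge))
# WaitLRI -> Pace: [t >= LRI]{t = 0;}
```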

Relevance:

100.00%

Publisher:

Abstract:

The main purpose of this paper is to provide the core description of the modelling exercise within the Shelf Edge Advection Mortality And Recruitment (SEAMAR) programme. An individual-based model (IBM) was developed for the prediction of year-to-year survival of the early life-history stages of mackerel (Scomber scombrus) in the eastern North Atlantic. The IBM is one of two components of the model system. The first component is a circulation model to provide physical input data for the IBM. The circulation model is a geographical variant of the HAMburg Shelf Ocean Model (HAMSOM). The second component is the IBM, which is an i-space configuration model in which large numbers of individuals are followed as discrete entities to simulate the transport, growth and mortality of mackerel eggs, larvae and post-larvae. Larval and post-larval growth is modelled as a function of length, temperature and food distribution; mortality is modelled as a function of length and absolute growth rate. Each particle is considered as a super-individual representing 10^6 eggs at the outset of the simulation, and then declining according to the mortality function. Simulations were carried out for the years 1998-2000. Results showed concentrations of particles at Porcupine Bank and the adjacent Irish shelf, along the Celtic Sea shelf-edge, and in the southern Bay of Biscay. High survival was observed only at Porcupine and the adjacent shelf areas, and, more patchily, around the coastal margin of Biscay. The low survival along the shelf-edge of the Celtic Sea was due to the consistently low estimates of food availability in that area.
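A minimal super-individual update in the spirit of the IBM, with illustrative growth and mortality closures (SEAMAR's actual functional forms and parameters are not reproduced here):

```python
# Each super-individual carries a length and the number of real
# eggs/larvae it represents; the count declines via a mortality rate
# that depends on length and growth. Closures and constants are toys.
import numpy as np

dt = 1.0  # day

def growth_rate(length_mm, temp_c, food):
    """Toy growth closure: rises with temperature and food, slows with length."""
    return 0.05 * food * (temp_c / 15.0) * np.exp(-0.05 * length_mm)

def mortality_rate(length_mm, g):
    """Toy mortality closure: highest for small, slow-growing individuals."""
    return max(0.02, 0.3 - 2.0 * g - 0.002 * length_mm)

length, n = 3.0, 1e6    # one super-individual: 10^6 eggs at the outset
for day in range(60):
    g = growth_rate(length, temp_c=14.0, food=1.0)
    m = mortality_rate(length, g)
    length += g * length * dt          # length-based growth
    n *= np.exp(-m * dt)               # exponential survival

print(f"length {length:.1f} mm, survivors {n:.0f}")
```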

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Glaucoma is a leading cause of avoidable blindness worldwide. Open angle glaucoma is the most common type of glaucoma. No randomised controlled trials have been conducted evaluating the effectiveness of glaucoma screening for reducing sight loss. It is unclear what the most appropriate intervention to be evaluated in any glaucoma screening trial would be. The purpose of this study was to develop the clinical components of an intervention for evaluation in a glaucoma (open angle) screening trial that would be feasible and acceptable in a UK eye-care service.

METHODS: A mixed-methods study, based on the Medical Research Council (MRC) framework for complex interventions, integrating qualitative (semi-structured interviews with 46 UK eye-care providers, policy makers and health service commissioners), and quantitative (economic modelling) methods. Interview data were synthesised and used to revise the screening interventions compared within an existing economic model.

RESULTS: The qualitative data indicated broad-based support for a glaucoma screening trial to take place in primary care, using ophthalmic-trained technical assistants supported by optometry input. The precise location should be tailored to local circumstances. There was variability in opinion around the choice of screening test and target population. Integrating the interview findings with cost-effectiveness criteria reduced 189 potential components to a two-test intervention including either optic nerve photography or screening-mode perimetry (a measure of visual field sensitivity), with or without tonometry (a measure of intraocular pressure). It would be more cost-effective, and thus acceptable in a policy context, to target screening for open angle glaucoma at those at highest risk, but on both practicality and equity grounds the optimal strategy was to screen a general population cohort beginning at age forty.

CONCLUSIONS: Interventions for screening for open angle glaucoma that would be feasible from a service delivery perspective were identified. Integration within an economic modelling framework explicitly highlighted the trade-off between cost-effectiveness, feasibility and equity. This study exemplifies the MRC recommendation to integrate qualitative and quantitative methods in developing complex interventions. The next step in the development pathway should encompass the views of service users.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: The transtheoretical model has been successful in promoting health behavior change in general and clinical populations. However, there is little knowledge about the application of the transtheoretical model to explain physical activity behavior in individuals with non-cystic fibrosis bronchiectasis. The aim was to examine patterns of (1) physical activity and (2) mediators of behavior change (self-efficacy, decisional balance, and processes of change) across stages of change in individuals with non-cystic fibrosis bronchiectasis.

METHODS: Fifty-five subjects with non-cystic fibrosis bronchiectasis (mean age ± SD = 63 ± 10 y) had physical activity assessed over 7 d using an accelerometer. Each component of the transtheoretical model was assessed using validated questionnaires. Subjects were divided into groups depending on stage of change: Group 1 (pre-contemplation and contemplation; n = 10), Group 2 (preparation; n = 20), and Group 3 (action and maintenance; n = 25). Statistical analyses included one-way analysis of variance and Tukey-Kramer post hoc tests.

RESULTS: Physical activity variables were significantly (P < .05) higher in Group 3 (action and maintenance) compared with Group 2 (preparation) and Group 1 (pre-contemplation and contemplation). For self-efficacy, there were no significant differences between groups for mean scores (P = .14). Decisional balance cons (barriers to being physically active) were significantly lower in Group 3 versus Group 2 (P = .032). For processes of change, substituting alternatives (substituting inactive options for active options) was significantly higher in Group 3 versus Group 1 (P = .01), and enlisting social support (seeking out social support to increase and maintain physical activity) was significantly lower in Group 3 versus Group 2 (P = .038).

CONCLUSIONS: The pattern of physical activity across stages of change is consistent with the theoretical predictions of the transtheoretical model. Constructs of the transtheoretical model that appear to be important at different stages of change include decisional balance cons, substituting alternatives, and enlisting social support. This study provides support to explore transtheoretical model-based physical activity interventions in individuals with non-cystic fibrosis bronchiectasis.

Relevance:

100.00%

Publisher:

Abstract:

The overall objective of this work was the development of new optical fibre (OF) chemical sensors for the analysis of volatile organic compounds (VOCs) in industrial environments. The sensing component of the developed sensors consists of a small section of optical fibre coated with a polymer film. The morphology of the polymer films was analysed and characterised by scanning electron microscopy (SEM), and film thickness was determined by Rutherford backscattering spectrometry (RBS). The analytical performance of the OF sensors was evaluated with respect to different operational parameters, such as coating solution concentration, polymer film deposition technique, injection cell temperature, polymer curing temperature, carrier gas flow rate, laser wavelength and operating frequency, and the structural configurations of the injection cell and analytical tube. Two OF sensors operating in the infrared region were developed for the determination of different classes of VOCs, namely aromatic, chlorinated and aliphatic hydrocarbons, as well as alcohols. The developed OF sensors showed adequate analytical characteristics in terms of sensitivity, linearity, repeatability and reproducibility of the analytical signal, with a response time of approximately 30 seconds. An OF sensor operating in the visible region (635-650 nm) was also developed for the speciation of benzene, toluene and o-xylene, and was applied to the analysis of real air samples from a solvent industry. Regarding the monitoring of VOCs in industrial environments, an OF sensor was developed for the in situ and remote monitoring (up to a maximum distance of 60 metres from the sampling site) of benzene, toluene, ethylbenzene, p-xylene, m-xylene and o-xylene (BTEX), using a 1550 nm laser diode. For the determination of BTEX, the analytical performance of the developed sensor was compared with gas chromatography with flame ionisation detection (GC-FID). Two optical fibre detectors coupled to a gas chromatograph were also developed for the speciation of alcohols and aromatic hydrocarbons. The developed methodology, based on gas chromatography coupled to an optical fibre detector (GC-OF), was applied to the analysis of real air samples from a solvent industry, and the results were compared with those obtained by GC-FID. Finally, a study was carried out with the aim of obtaining a general model for the analytical response of the developed optical fibre sensors.

Relevance:

100.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually along all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum allocation of data to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
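As a compact illustration of path-wise rule checking on a machine-code control flow graph, the sketch below flags banked-register accesses that are not preceded on a path by the matching bank-select instruction; the opcodes are PIC-like, but the rule and the program are illustrative, not the dissertation's rule set.

```python
# Path-wise rule checking on a machine-code CFG: every access to a
# banked register must follow the matching bank-select on each path.
# Basic block -> (instructions, successor blocks); acyclic toy program.
cfg = {
    "B0": (["BSF STATUS,RP0", "MOVWF TRISB"], ["B1", "B2"]),
    "B1": (["BCF STATUS,RP0", "MOVWF PORTB"], ["B3"]),
    "B2": (["MOVWF PORTB"], ["B3"]),          # PORTB is a bank-0 register
    "B3": (["RETURN"], []),
}
BANK_OF = {"TRISB": 1, "PORTB": 0}            # register -> required bank

def check_paths(block, bank, path):
    """DFS over the CFG tracking the active bank; report violations."""
    instrs, succs = cfg[block]
    for ins in instrs:
        if ins == "BSF STATUS,RP0":
            bank = 1
        elif ins == "BCF STATUS,RP0":
            bank = 0
        elif ins.startswith("MOVWF"):
            reg = ins.split()[1]
            if BANK_OF[reg] != bank:
                print(f"violation on path {path + [block]}: "
                      f"{ins} needs bank {BANK_OF[reg]}, active bank {bank}")
    for s in succs:
        check_paths(s, bank, path + [block])

check_paths("B0", bank=0, path=[])
# flags the B0 -> B2 path: MOVWF PORTB executed while bank 1 is selected
```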