968 results for Object Model
Abstract:
This paper describes a practical application of MDA and reverse engineering based on a domain-specific modelling language. A well-defined metamodel of a domain-specific language is useful for verification and validation of associated tools. We apply this approach to SIFA, a security analysis tool. SIFA has evolved as requirements have changed, and it has no metamodel. Hence, testing SIFA's correctness is difficult. We introduce a formal metamodelling approach to develop a well-defined metamodel of the domain. Initially, we develop a domain model in EMF by reverse engineering the SIFA implementation. Then we transform the EMF model to Object-Z using model transformation. Finally, we complete the Object-Z model by specifying system behaviour. The outcome is a well-defined metamodel that precisely describes the domain and the security properties that it analyses. It also provides a reliable basis for testing the current SIFA implementation and forward engineering its successor.
Abstract:
This paper presents a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP. Object-Z provides a convenient way of modelling complex data structures needed to define the component processes of such systems, and CSP enables the concise specification of process interactions. The basis of the integration is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be used directly within the CSP part of the specification. In addition to specification, we also discuss refinement and verification in this model. The common semantic basis enables a unified method of refinement to be used, based upon CSP refinement. To enable state-based techniques to be used for the Object-Z components of a specification we develop state-based refinement relations which are sound and complete with respect to CSP refinement. In addition, a verification method for static and dynamic properties is presented. The method allows us to verify properties of the CSP system specification in terms of its component Object-Z classes by using the laws of the CSP operators together with the logic for Object-Z.
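For reference, the standard notion of CSP refinement in the failures-divergences model, on which such a unified refinement method is typically based (this is the textbook definition; the paper's exact semantic model may differ in detail):

```latex
% Failures-divergences refinement (textbook form, assumed here for illustration)
P \sqsubseteq_{FD} Q \;\iff\;
   \mathit{failures}(Q) \subseteq \mathit{failures}(P)
   \;\wedge\;
   \mathit{divergences}(Q) \subseteq \mathit{divergences}(P)
```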
Abstract:
At the core of the analysis task in the development process is information systems requirements modelling. Modelling of requirements has been occurring for many years, and the techniques used have progressed from flowcharting through data flow diagrams and entity-relationship diagrams to object-oriented schemas today. Unfortunately, researchers have been able to offer practitioners only little theoretical guidance on which techniques to use and when. In an attempt to address this situation, Wand and Weber have developed a series of models based on the ontological theory of Mario Bunge, known as the Bunge-Wand-Weber (BWW) models. Two particular criticisms of the models have persisted, however: the understandability of the constructs in the BWW models and the difficulty in applying the models to a modelling technique. This paper addresses these issues by presenting a meta model of the BWW constructs using a meta language that is familiar to many IS professionals, more specific than plain English text, but easier to understand than the set-theoretic language of the original BWW models. Such a meta model also facilitates the application of the BWW theory to other modelling techniques that have similar meta models defined. Moreover, this approach supports the identification of patterns of constructs that might be common across meta models for modelling techniques. Such findings are useful in extending and refining the BWW theory.
Abstract:
This paper is concerned with methods for refinement of specifications written using a combination of Object-Z and CSP. Such a combination has proved to be a suitable vehicle for specifying complex systems which involve state and behaviour, and several proposals exist for integrating these two languages. The basis of the integration in this paper is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be combined using CSP operators. It has been shown that this semantic model allows state-based refinement relations to be used on the Object-Z components in an integrated Object-Z/CSP specification. However, the current refinement methodology does not allow the structure of a specification to be changed in a refinement, whereas a full methodology would, for example, allow concurrency to be introduced during the development life-cycle. In this paper, we tackle these concerns and discuss refinements of specifications written using Object-Z and CSP where we change the structure of the specification when performing the refinement. In particular, we develop a set of structural simulation rules which allow single components to be refined to more complex specifications involving CSP operators. The soundness of these rules is verified against the common semantic model, and they are illustrated via a number of examples.
Abstract:
Rapid prototyping (RP) is an approach for automatically building a physical object through solid freeform fabrication. Nowadays, RP has become a vital aspect of most product development processes, due to the significant competitive advantages it offers compared to traditional manual model making. Even in academic environments, it is important to be able to quickly create accurate physical representations of concept solutions. Some of these can be used for simple visual validation, while others can be employed for ergonomic assessment by potential users or even for physical testing. However, the cost of traditional RP methods prevents their regular use in most academic environments, and even for very preliminary prototypes in many small companies. This results in the first physical prototypes being delayed to later stages, or in very rough mock-ups that are not as useful as they could be. In this paper we propose an approach for rapid and inexpensive model-making, which was developed in an academic context, and which can be employed for a variety of objects.
Abstract:
In this work, dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of three robots that must transport a large object and simultaneously avoid collisions with either static or dynamic obstacles. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavior variables, heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of the behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
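A common textbook form for such a behavioral dynamics of the heading direction (illustrative only; the abstract does not state the exact equations used in the paper):

```latex
% Target-acquisition contribution to the heading dynamics (illustrative)
\dot{\phi} \;=\; f_{\mathrm{tar}}(\phi)
           \;=\; -\lambda_{\mathrm{tar}}\,\sin\bigl(\phi - \psi_{\mathrm{tar}}\bigr),
\qquad \lambda_{\mathrm{tar}} > 0,
```

where phi is the robot's heading direction and psi_tar the direction toward the target; the fixed point phi = psi_tar has negative slope and is therefore an asymptotically stable state (attractor) of the heading dynamics.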
Abstract:
In this paper dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of two robots that must transport a large object and simultaneously avoid collisions with obstacles (either static or dynamic). This work extends the previous work with two robots (see [1] and [5]). However, here we demonstrate that it is possible to simplify the architecture presented in [1] and [5] and reach an equally stable global behavior. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavior variables, heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of a behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
Abstract:
Dynamical systems theory is used here as a theoretical language and tool to design a distributed control architecture for a team of two mobile robots that must transport a long object and simultaneously avoid obstacles. In this approach, modeling is carried out at the level of behaviors. A “dynamics” of behavior is defined over a state space of behavioral variables (heading direction and path velocity). The environment is also modeled in these terms by representing task constraints as attractors (i.e. asymptotically stable states) or repellers (i.e. unstable states) of the behavioral dynamics. For each robot, attractors and repellers are combined into a vector field that governs the behavior. The resulting dynamical systems that generate the behavior of the robots may be nonlinear. By design the systems are tuned so that the behavioral variables are always very close to one attractor. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of our dynamic model architectures.
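A minimal simulation sketch of this kind of attractor/repeller heading dynamics (the function forms, parameter values and names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def target_attractor(phi, psi_tar, lam=2.0):
    # Attractive contribution: vanishes at phi == psi_tar with negative slope,
    # so the target direction is an asymptotically stable state (attractor).
    return -lam * np.sin(phi - psi_tar)

def obstacle_repeller(phi, psi_obs, beta=4.0, sigma=0.4):
    # Repulsive contribution: vanishes at phi == psi_obs with positive slope
    # (repeller), limited in angular range by a Gaussian window.
    return beta * (phi - psi_obs) * np.exp(-((phi - psi_obs) ** 2) / (2 * sigma ** 2))

def simulate_heading(phi0, psi_tar, psi_obs, dt=0.01, steps=2000):
    # Euler integration of the heading-direction dynamics.
    phi = phi0
    for _ in range(steps):
        phi += dt * (target_attractor(phi, psi_tar) + obstacle_repeller(phi, psi_obs))
    return phi

if __name__ == "__main__":
    # The heading relaxes toward the target direction (1.0 rad) while being
    # pushed away from the obstacle direction (0.5 rad) along the way.
    print(simulate_heading(phi0=0.0, psi_tar=1.0, psi_obs=0.5))
```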
Abstract:
In this paper we introduce a formation control loop that maximizes the performance of the cooperative perception of a tracked target by a team of mobile robots, while maintaining the team in formation, with a dynamically adjustable geometry which is a function of the quality of the target perception by the team. In the formation control loop, the controller module is a distributed non-linear model predictive controller and the estimator module fuses local estimates of the target state, obtained by a particle filter at each robot. The two modules and their integration are described in detail, including a real-time database associated with a wireless communication protocol that facilitates the exchange of state data while reducing collisions among team members. Simulation and real robot results for indoor and outdoor teams of different robots are presented. The results highlight how our method successfully enables a team of homogeneous robots to minimize the total uncertainty of the tracked target cooperative estimate while complying with performance criteria such as keeping a pre-set distance between the teammates and the target, and avoiding collisions with teammates and/or surrounding obstacles.
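The abstract does not spell out the fusion rule used by the estimator module; as a minimal sketch, assuming each robot summarizes its particle-filter estimate by a Gaussian, local estimates could be combined by inverse-covariance (information) weighting:

```python
import numpy as np

def fuse_estimates(means, covariances):
    # Inverse-covariance (information) weighting of independent Gaussian
    # estimates of the target state. Illustrative only: the paper fuses
    # particle-filter estimates and may use a different rule.
    info = sum(np.linalg.inv(P) for P in covariances)
    info_mean = sum(np.linalg.inv(P) @ m for m, P in zip(means, covariances))
    fused_cov = np.linalg.inv(info)
    fused_mean = fused_cov @ info_mean
    return fused_mean, fused_cov

# Example: two robots' local estimates of a 2-D target position.
m1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
m2, P2 = np.array([1.2, 1.8]), np.diag([0.2, 0.8])
mean, cov = fuse_estimates([m1, m2], [P1, P2])
print(mean, np.diag(cov))
```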
Abstract:
The reported productivity gains from using models and model transformations to develop entire systems, after almost a decade of experience applying model-driven approaches to system development, are already undeniable benefits of this approach. However, the slowness of higher-level, rule-based model transformation languages hinders the applicability of this approach at industrial scale. Lower-level, more efficient languages can be used, but then productivity and ease of maintenance cease to exist. The abstraction penalty problem is not new; it also exists for high-level, object-oriented languages, and yet everyone is using them now. Why, then, is not everyone using rule-based model transformation languages? In this thesis, we propose a framework, comprised of a language and its respective environment, designed to tackle the most performance-critical operation of high-level model transformation languages: pattern matching. This framework shows that it is possible to mitigate the performance penalty while still using high-level model transformation languages.
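To illustrate why pattern matching dominates the cost of rule-based transformations, here is a deliberately naive structural matcher over a toy model graph (purely illustrative; the names and data structures are assumptions, not the framework proposed in the thesis):

```python
from itertools import permutations

def match_pattern(model_edges, pattern_edges):
    # Naive structural pattern matching: try every injective assignment of
    # pattern nodes to model nodes and keep those preserving all pattern edges.
    # Exponential in the pattern size -- the cost that dedicated pattern-matching
    # frameworks try to mitigate. Purely illustrative, not the thesis framework.
    model_nodes = {n for edge in model_edges for n in edge}
    pattern_nodes = sorted({n for edge in pattern_edges for n in edge})
    matches = []
    for assignment in permutations(model_nodes, len(pattern_nodes)):
        binding = dict(zip(pattern_nodes, assignment))
        if all((binding[a], binding[b]) in model_edges for a, b in pattern_edges):
            matches.append(binding)
    return matches

# A tiny model: class "A" references "B", and "B" references "C".
model = {("A", "B"), ("B", "C")}
pattern = [("x", "y")]   # find every pair of model elements linked by a reference
print(match_pattern(model, pattern))
```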
Abstract:
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.
Abstract:
Background: Glutathione (GSH), a major cellular redox regulator and antioxidant, is decreased in the cerebrospinal fluid and prefrontal cortex of schizophrenia patients. The gene of the modifier (GCLM) subunit of the key GSH-synthesizing enzyme, glutamate-cysteine ligase, is associated with schizophrenia, suggesting that the deficit in the GSH system is of genetic origin. Using the GCLM knock-out (KO) mouse as a model system with 60% decreased brain GSH levels and, thus, strong vulnerability to oxidative stress, we have shown that GSH dysregulation results in abnormal mouse brain morphology (e.g., reduced parvalbumin, PV, immuno-reactivity in frontal areas) and function. Additional oxidative stress, induced by GBR12909 (a dopamine re-uptake inhibitor), enhances morphological changes even further. Aim: In the present study we use the GCLM KO mouse model system, asking now whether GSH dysregulation also compromises mouse behaviour and cognition. Methods: Male and female wildtype (WT) and GCLM-KO mice are treated with GBR12909 or phosphate buffered saline (PBS) from postnatal day (P) 5 to 10, and are behaviourally tested at P 60 and older. Results: In comparison to WT, KO animals of both sexes are hyperactive in the open field, display more frequent open arm entries on the elevated plus maze, longer float latencies in the Porsolt swim test, and more frequent contacts with novel and familiar objects. Contrary to other reports of animal models with reduced PV immuno-reactivity, GCLM-KO mice display normal rule learning capacity and perform normally on a spatial recognition task. GCLM-KO mice do, however, show a strong deficit in object recognition after a 15-minute retention delay. GBR12909 treatment exerts no additional effect. Conclusions: The results suggest that animals with impaired regulation of brain oxidative stress are impulsive and have reduced behavioural control in novel, unpredictable contexts. Moreover, GSH dysregulation seems to induce a selective attentional or stimulus-encoding deficit: despite intensive object exploration, GCLM-KO mice cannot discriminate between novel and familiar objects. In conclusion, the present data indicate that GSH dysregulation may contribute to the manifestation of behavioural and cognitive anomalies that are associated with schizophrenia.
Abstract:
Given a set of images of scenes containing different object categories (e.g. grass, roads), our objective is to discover these objects in each image, and to use the object occurrences to perform scene classification (e.g. beach scene, mountain scene). We achieve this by using a supervised learning algorithm able to learn from few images, to facilitate the user's task. We use a probabilistic model to recognise the objects, and we then classify the scene based on the object occurrences. Experimental results are shown and evaluated to demonstrate the validity of our proposal. Object recognition performance is compared to the approaches of He et al. (2004) and Marti et al. (2001) using their own datasets. Furthermore, an unsupervised method is implemented in order to evaluate the advantages and disadvantages of our supervised classification approach versus an unsupervised one.
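As a minimal sketch of classifying a scene from object occurrences (a generic multinomial, naive-Bayes-style stand-in, not the probabilistic model used in the paper):

```python
import numpy as np

def train_scene_model(occurrence_counts, labels, alpha=1.0):
    # Per-scene multinomial distribution over object occurrences, with
    # Laplace smoothing. A generic stand-in for the paper's probabilistic model.
    theta = {}
    for scene in sorted(set(labels)):
        counts = np.sum([x for x, y in zip(occurrence_counts, labels) if y == scene],
                        axis=0) + alpha
        theta[scene] = counts / counts.sum()
    return theta

def classify_scene(occurrences, theta):
    # Pick the scene maximizing the log-likelihood of the observed object counts.
    scores = {scene: float(occurrences @ np.log(p)) for scene, p in theta.items()}
    return max(scores, key=scores.get)

# Object categories: [grass, road, sand]; counts per training image.
X = [np.array([5, 1, 0]), np.array([0, 1, 6]), np.array([4, 2, 0])]
y = ["mountain", "beach", "mountain"]
model = train_scene_model(X, y)
print(classify_scene(np.array([3, 0, 1]), model))   # -> "mountain"
```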
Abstract:
In this paper we address a problem arising in risk management; namely, the study of price variations of different contingent claims in the Black-Scholes model due to anticipating future events. The method we propose to use is an extension of the classical Vega index, i.e. the price derivative with respect to the constant volatility, in the sense that we perturb the volatility in different directions. This directional derivative, which we denote the local Vega index, will serve as the main object in the paper, and one of the purposes is to relate it to the classical Vega index. We show that for all contingent claims studied in this paper the local Vega index can be expressed as a weighted average of the perturbation in volatility. In the particular case where the interest rate and the volatility are constant and the perturbation is deterministic, the local Vega index is an average of this perturbation multiplied by the classical Vega index. We also study the well-known goal problem of maximizing the probability of a perfect hedge and show that the speed of convergence is in fact dependent on the local Vega index.
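For reference, the classical Vega index that the local Vega index extends is, for a European call under the Black-Scholes model with constant volatility (a standard fact, not a result of the paper):

```latex
% Classical Vega of a European call in the Black-Scholes model
\nu \;=\; \frac{\partial C}{\partial \sigma} \;=\; S\,\varphi(d_1)\,\sqrt{T},
\qquad
d_1 \;=\; \frac{\ln(S/K) + \bigl(r + \tfrac{1}{2}\sigma^{2}\bigr)T}{\sigma\sqrt{T}},
```

where phi denotes the standard normal density, S the spot price, K the strike, r the interest rate and T the time to maturity.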
Abstract:
We study a situation in which an auctioneer wishes to sell an object to one of N risk-neutral bidders with heterogeneous preferences. The auctioneer does not know bidders' preferences but has private information about the characteristics of the object, and must decide how much information to reveal prior to the auction. We show that the auctioneer has incentives to release less information than would be efficient and that the amount of information released increases with the level of competition (as measured by the number of bidders). Furthermore, in a perfectly competitive market the auctioneer would provide the efficient level of information.