32 results for "Factory of software"
Abstract:
Many software engineers have found it difficult to understand, incorporate and use different formal models consistently in the process of software development, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models, and supports more effective software design in terms of understanding, sharing and reuse in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect knowledge about formal software models and other related documents using semantic technology. We first propose a methodology with tool support to automatically derive ontological metadata from formal software models and semantically describe them. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. A method with a prototype tool is presented to enhance semantic querying of software models and other artefacts. © 2014.
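The kind of metadata derivation and semantic querying the abstract describes can be illustrated with a short sketch. This is a minimal illustration, assuming rdflib; the vocabulary, the classic BirthdayBook schema fields and the query are hypothetical stand-ins, not the paper's actual ontology or tool.

```python
# A minimal sketch: describe a Z schema as RDF metadata, then query it.
# The z-ontology vocabulary below is a hypothetical illustration.
from rdflib import Graph, Literal, Namespace, RDF

Z = Namespace("http://example.org/z-ontology#")
g = Graph()

# Ontological metadata for one Z schema: its declarations and invariant.
schema = Z["BirthdayBook"]
g.add((schema, RDF.type, Z.Schema))
g.add((schema, Z.declares, Literal("known : P NAME")))
g.add((schema, Z.declares, Literal("birthday : NAME -|-> DATE")))
g.add((schema, Z.hasInvariant, Literal("known = dom birthday")))

# A semantic query over the metadata: which schemas constrain 'birthday'?
q = """
SELECT ?s WHERE {
  ?s a <http://example.org/z-ontology#Schema> ;
     <http://example.org/z-ontology#hasInvariant> ?inv .
  FILTER CONTAINS(STR(?inv), "birthday")
}
"""
for row in g.query(q):
    print(row.s)
```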
Abstract:
In the specific area of software engineering (SE) for self-adaptive systems (SASs) there is a growing research awareness of the synergy between SE and artificial intelligence (AI). However, only a few significant results have been published so far. In this paper, we propose a novel and formal Bayesian definition of surprise as the basis for quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behavior. A surprise measures how observed data affect the models or assumptions of the world during runtime. The key idea is that a "surprising" event can be defined as one that causes a large divergence between the belief distributions prior to and posterior to the event occurring. In such a case the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. In this paper, we discuss possible applications of the Bayesian theory of surprise for the case of self-adaptive systems using Bayesian dynamic decision networks. Copyright © 2014 ACM.
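The divergence-based notion of surprise lends itself to a compact worked example. The sketch below is a minimal illustration, assuming a Beta-Bernoulli model of a monitored binary event and SciPy; it measures surprise as the KL divergence from posterior to prior, which is one common reading of Bayesian surprise, not necessarily the paper's exact formulation.

```python
# A minimal sketch of Bayesian surprise for a monitored binary event
# (e.g. request success/failure), using the closed-form KL divergence
# between Beta distributions.
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL( Beta(a1, b1) || Beta(a2, b2) )."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

# Prior belief: the monitored operation almost always succeeds.
a, b = 90.0, 10.0
# Runtime observations: 2 successes and 18 failures in the last window.
post_a, post_b = a + 2, b + 18

surprise = kl_beta(post_a, post_b, a, b)   # divergence posterior -> prior
print(f"surprise = {surprise:.2f} nats")   # large value -> flag or adapt
```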
Abstract:
Software architecture plays an essential role in the high-level description of a system design, where the structure and communication are emphasized. Despite its importance in the software engineering process, the lack of formal description and automated verification hinders the development of good software architecture models. In this paper, we present an approach to support the rigorous design and verification of software architecture models using Semantic Web technology. We view software architecture models as ontology representations, where their structures and communication constraints are captured by the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL). Specific configurations of the design are represented as concrete instances of the ontology, to which their structures and dynamic behaviors must conform. Furthermore, ontology reasoning tools can be applied to perform various automated verifications of the design to ensure correctness, such as consistency checking, style recognition, and behavioral inference.
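A minimal sketch of this ontology-based view of architecture follows, assuming owlready2 (and a Java reasoner such as HermiT behind sync_reasoner); the component/connector vocabulary and the ontology IRI are hypothetical, and SWRL rules are omitted for brevity.

```python
# A minimal sketch: architecture as ontology, configuration as instances,
# consistency checking via a reasoner. Names are illustrative.
from owlready2 import (get_ontology, Thing, ObjectProperty,
                       sync_reasoner, OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/architecture.owl")

with onto:
    class Component(Thing): pass
    class Connector(Thing): pass
    class connects(ObjectProperty):
        domain = [Connector]
        range = [Component]

    # A concrete configuration is expressed as instances of the ontology.
    client = Component("client")
    server = Component("server")
    pipe = Connector("pipe")
    pipe.connects = [client, server]

try:
    # The reasoner checks the configuration against the structural
    # constraints captured in the ontology.
    sync_reasoner()
    print("configuration is consistent")
except OwlReadyInconsistentOntologyError:
    print("configuration violates the architectural constraints")
```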
Abstract:
This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself. Linear transformations of delay embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems. In the course of this research a sophisticated piece of software has been developed. The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
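The core fitting step, a radial basis function map between two delay embeddings, can be sketched as follows, assuming SciPy >= 1.7; the synthetic series, embedding parameters and kernel are illustrative stand-ins for the thesis's software.

```python
# A minimal sketch of fitting an RBF map between two delay embeddings
# and using the residual error as evidence for (or against) equivalence.
import numpy as np
from scipy.interpolate import RBFInterpolator

def delay_embed(x, dim, tau):
    """Time-delay embedding: rows are [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Two simultaneous measurements of one system (a noisy sine wave and a
# nonlinear function of it stand in for real sensor data).
t = np.linspace(0, 40, 2000)
s1 = np.sin(t) + 0.01 * np.random.randn(t.size)
s2 = np.sin(t) ** 3 + 0.01 * np.random.randn(t.size)

X = delay_embed(s1, dim=3, tau=10)
Y = delay_embed(s2, dim=3, tau=10)

# Fit the RBF map on the first half and measure prediction error on the
# second half; a small, structureless error suggests the two embeddings
# are diffeomorphically related.
half = X.shape[0] // 2
rbf = RBFInterpolator(X[:half], Y[:half], kernel="thin_plate_spline",
                      smoothing=1e-3)
err = np.linalg.norm(rbf(X[half:]) - Y[half:], axis=1)
print("mean map error:", err.mean())
```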
Abstract:
This research investigates the general user interface problems in using networked services. Some of the problems are: users have to recall machine names and procedures to invoke networked services; interactions with some of the services are by means of menu-based interfaces which are quite cumbersome to use; and inconsistencies exist between the interfaces for different services because they were developed independently. These problems have to be removed so that users can use the services effectively. A prototype system has been developed to help users interact with networked services. This consists of software which gives the user an easy and consistent interface with the various services. The prototype is based on a graphical user interface and it includes the following applications: Bath Information & Data Services; electronic mail; file editor. The prototype incorporates an online help facility to assist users using the system. The prototype can be divided into two parts: the user interface part that manages interaction with the user, and the communication part that enables communication with networked services to take place. The implementation is carried out using an object-oriented approach where both the user interface part and the communication part are objects. The essential characteristics of object-orientation - abstraction, encapsulation, inheritance and polymorphism - can all contribute to the better design and implementation of the prototype. The Smalltalk Model-View-Controller (MVC) methodology has been the framework for the construction of the prototype user interface. The purpose of the development was to study the effectiveness of users' interaction with networked services. Having completed the prototype, test users were requested to use the system to evaluate its effectiveness. The evaluation of the prototype is based on observation, i.e. observing the way users use the system, and on the opinion ratings given by the users. Recommendations to improve the prototype further are given based on the results of the evaluation.
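The Model-View-Controller split described above can be sketched in a few lines; the classes below are hypothetical Python illustrations of the pattern, not the Smalltalk prototype's design.

```python
# A minimal MVC sketch: the model notifies views of changes; the
# controller translates user input into model operations.
class MailModel:
    """Holds application state and notifies dependent views of changes."""
    def __init__(self):
        self.messages, self.views = [], []

    def add_message(self, msg):
        self.messages.append(msg)
        for view in self.views:
            view.update(self)

class MailView:
    """Renders the model; knows nothing about input handling."""
    def update(self, model):
        print(f"inbox ({len(model.messages)}): {model.messages[-1]}")

class MailController:
    """Maps user actions onto the model."""
    def __init__(self, model):
        self.model = model

    def on_receive(self, msg):
        self.model.add_message(msg)

model = MailModel()
model.views.append(MailView())
MailController(model).on_receive("hello from the network")
```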
Abstract:
Cold roll forming is an extremely important but little-studied sheet metal forming process. In this thesis, the process of cold roll forming is introduced and it is seen that form roll design is central to the cold roll forming process. The conventional design and manufacture of form rolls is discussed and it is observed that surrounding the design process are a number of activities which, although peripheral, are time consuming and a possible source of error. A CAD/CAM system is described which alleviates many of the problems traditional to form roll design. New techniques for the calculation of strip length and for controlling the means of forming bends are detailed. The CAD/CAM system's advantages and limitations are discussed and, whilst the system has numerous significant advantages, its principal limitation can be said to be the need to manufacture form rolls and test them on a mill before a design can be declared satisfactory. A survey of the previous theoretical and experimental analysis of cold roll forming is presented and is found to be limited. By considering the previous work, a method of numerical analysis of the cold roll forming process is proposed, based on a minimum energy approach. Parallel to the numerical analysis, a comprehensive range of software has been developed to enhance the designer's visualisation of the effects of a form roll design. A complementary approach to the analysis of form roll designs is their generation; a method for the partial generation of designs is described. It is suggested that the two approaches should continue in parallel and that the limitation of each approach is knowledge of the cold roll forming process. Hence, an initial experimental investigation of the rolling of channel sections is described. Finally, areas of potential future work are discussed.
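The strip-length calculation mentioned above is, in essence, a developed-length computation: flat segments plus a bend allowance per bend. The sketch below illustrates this using the standard bend-allowance formula; the neutral-axis factor K is an assumption that in practice depends on material and the radius-to-thickness ratio, and this is not the thesis's actual technique.

```python
# A minimal sketch of developed strip length for a bent profile.
import math

def bend_allowance(angle_deg, inner_radius, thickness, k=0.4):
    """Arc length of the neutral axis through one bend; k is assumed."""
    return math.radians(angle_deg) * (inner_radius + k * thickness)

def strip_length(flats, bends, thickness):
    """flats: flat segment lengths; bends: (angle_deg, inner_radius) pairs."""
    return sum(flats) + sum(bend_allowance(a, r, thickness) for a, r in bends)

# A simple channel section: 50 mm web, two 20 mm flanges, two 90-degree
# bends of 2 mm inner radius in 1.5 mm sheet.
print(f"{strip_length([20, 50, 20], [(90, 2), (90, 2)], 1.5):.2f} mm")
```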
River basin surveillance using remotely sensed data: a water resources information management system
Abstract:
This thesis describes the development of an operational river basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effect of catchment management policies. A primary concern of the study is the collection of sufficient, and sufficiently accurate, information to model hydrological processes. Remote sensing, in combination with conventional point source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost-effective methods of acquiring remotely sensed imagery, from airborne video survey to real-time ingestion of meteorological satellite data. Inexpensive micro-computer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data, and with historical records. For the system to have any real potential the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low-cost technologies, from micro-computers to video cameras, enables the introduction of water resources information management systems into developing countries, where the potential benefits are greatest.
Abstract:
The process framework comprises three phases, as follows: scope the supply chain/network; identify the options for supply system architecture; and select the supply system architecture. It facilitates a structured approach that analyses the supply chain/network contextual characteristics in order to ensure alignment with the appropriate supply system architecture. The process framework was derived from a comprehensive literature review and archival case study analysis. The review led to the classification of supply system architectures according to their orientation: integrated, partially integrated, co-ordinated or independent. The classification was combined with the characteristics that influence the selection of supply system architecture to encapsulate the conceptual framework. It builds upon existing frameworks and methodologies by focusing on structured procedure, supporting project management, facilitating participation and clarifying the point of entry. The process framework was initially tested in three case study applications from the food, automobile and hand tool industries. A variety of industrial settings was chosen to illustrate transferability. The case study applications indicate that the process framework is a valid approach to the problem; however, further testing is required. In particular, the use of group support system technologies to support the process, and the steps involving the participation of software vendors, need further testing. However, the process framework can be followed easily due to the clarity of its presentation. It considers the issue of timing by including alternative decision-making techniques, dependent on the constraints. It is useful for ensuring that a sound business case is developed, with supporting documentation and analysis that identifies the strategic and functional requirements of the supply system architecture.
Abstract:
Requirements are sensitive to the context in which the system-to-be must operate. Where such context is well-understood and is static or evolves slowly, existing RE techniques can be made to work well. Increasingly, however, development projects are being challenged to build systems to operate in contexts that are volatile over short periods in ways that are imperfectly understood. Such systems need to be able to adapt to new environmental contexts dynamically, but the contextual uncertainty that demands this self-adaptive ability makes it hard to formulate, validate and manage their requirements. Different contexts may demand different requirements trade-offs. Unanticipated contexts may even lead to entirely new requirements. To help counter this uncertainty, we argue that requirements for self-adaptive systems should be run-time entities that can be reasoned over in order to understand the extent to which they are being satisfied and to support adaptation decisions that can take advantage of the systems' self-adaptive machinery. We take our inspiration from the fact that explicit, abstract representations of software architectures used to be considered design-time-only entities but computational reflection showed that architectural concerns could be represented at run-time too, helping systems to dynamically reconfigure themselves according to changing context. We propose to use analogous mechanisms to achieve requirements reflection. In this paper we discuss the ideas that support requirements reflection as a means to articulate some of the outstanding research challenges.
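The idea of requirements as run-time entities that can be reasoned over can be made concrete with a small sketch; the class, threshold and monitoring scheme below are hypothetical illustrations, not a design from the paper.

```python
# A minimal sketch of a requirement reified as a run-time entity, in the
# spirit of requirements reflection.
from dataclasses import dataclass, field

@dataclass
class RuntimeRequirement:
    name: str
    target: float                  # required satisfaction level in [0, 1]
    observations: list = field(default_factory=list)

    def observe(self, level: float) -> None:
        """Record a monitored satisfaction level from the running system."""
        self.observations.append(level)

    def satisfied(self, window: int = 10) -> bool:
        """Reason over recent observations to decide if the requirement holds."""
        recent = self.observations[-window:]
        return bool(recent) and sum(recent) / len(recent) >= self.target

req = RuntimeRequirement("respond within 200 ms", target=0.95)
for level in (1.0, 1.0, 0.8, 0.7):
    req.observe(level)
if not req.satisfied():
    print(f"'{req.name}' unsatisfied; trigger self-adaptation")
```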
Abstract:
The complexity of environments faced by dynamically adaptive systems (DAS) means that the RE process will often be iterative, with analysts revisiting the system specifications based on new environmental understanding, the product of experience with experimental deployments, or even after final deployment. An ability to trace backwards to an identified environmental assumption, and to trace forwards to find the areas of a DAS's specification that are affected by changes in environmental understanding, aids in supporting this necessarily iterative RE process. This paper demonstrates how claims can be used as markers for areas of uncertainty in a DAS specification. The paper demonstrates backward tracing using claims to identify faulty environmental understanding, and forward tracing to allow the generation of new behaviour in the form of policy adaptations and models for transitioning the running system. © 2011 ACM.
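Claim-based backward and forward tracing amounts to walking a dependency graph from a claim node. Below is a minimal sketch, assuming networkx; the assumption/claim/goal chain is illustrative.

```python
# A minimal sketch of claim-based tracing over a DAS specification.
import networkx as nx

g = nx.DiGraph()
# Environmental assumptions support claims, which in turn justify
# parts of the specification (goals / policies).
g.add_edge("assumption: stable GPS signal", "claim: location is accurate")
g.add_edge("claim: location is accurate", "goal: route dynamically")
g.add_edge("goal: route dynamically", "policy: reroute on congestion")

faulty = "claim: location is accurate"
# Backward tracing: which environmental assumptions underlie the claim?
print("revisit:", list(nx.ancestors(g, faulty)))
# Forward tracing: which parts of the specification are affected?
print("regenerate:", list(nx.descendants(g, faulty)))
```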
Abstract:
Bayesian decision theory is increasingly applied to support decision-making processes under environmental variability and uncertainty. Researchers from application areas like psychology and biomedicine have applied these techniques successfully. However, in the area of software engineering, and specifically in the area of self-adaptive systems (SASs), little progress has been made in the application of Bayesian decision theory. We believe that techniques based on Bayesian Networks (BNs) are useful for systems that dynamically adapt themselves at runtime to a changing environment, which is usually uncertain. In this paper, we discuss the case for the use of BNs, specifically Dynamic Decision Networks (DDNs), to support the decision-making of self-adaptive systems. We present how such a probabilistic model can be used to support decision-making in SASs and justify its applicability. We have applied our DDN-based approach to the case of an adaptive remote data mirroring system. We discuss results, implications and potential benefits of the DDN to enhance the development and operation of self-adaptive systems, by providing mechanisms to cope with uncertainty and automatically make the best decision.
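A one-step version of such decision support can be sketched with a small Bayesian network standing in for the full DDN; pgmpy is assumed, and the variables, probabilities and utilities below are illustrative, loosely echoing the remote data mirroring case rather than reproducing the paper's model.

```python
# A minimal sketch: infer a belief over a hidden state from an
# observation, then pick the adaptation action with highest expected
# utility. All numbers are illustrative.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hidden network-link health influences the observed latency.
model = BayesianNetwork([("LinkOK", "HighLatency")])
model.add_cpds(
    TabularCPD("LinkOK", 2, [[0.9], [0.1]]),          # P(ok), P(failed)
    TabularCPD("HighLatency", 2,
               [[0.8, 0.1],                           # P(low  | ok, failed)
                [0.2, 0.9]],                          # P(high | ok, failed)
               evidence=["LinkOK"], evidence_card=[2]),
)
assert model.check_model()

belief = VariableElimination(model).query(["LinkOK"],
                                          evidence={"HighLatency": 1})

# Utility of each adaptation action under each link state.
utility = {"keep_sync": {0: 10.0, 1: -50.0},   # good if ok, costly if failed
           "go_async":  {0:  4.0, 1:   6.0}}   # safe fallback
expected = {a: sum(belief.values[s] * u[s] for s in (0, 1))
            for a, u in utility.items()}
print(max(expected, key=expected.get), expected)
```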
Abstract:
We argue that, for certain constrained domains, elaborate model transformation technologies (implemented from scratch in general-purpose programming languages) are unnecessary for model-driven engineering; instead, lightweight configuration of commercial off-the-shelf productivity tools suffices. In particular, in the CancerGrid project, we have been developing model-driven techniques for the generation of software tools to support clinical trials. A domain metamodel captures the community's best practice in trial design. A scientist authors a trial protocol, modelling their trial by instantiating the metamodel; customized software artifacts to support trial execution are generated automatically from the scientist's model. The metamodel is expressed as an XML Schema, in such a way that it can be instantiated by completing a form to generate a conformant XML document. The same process works at a second level for trial execution: among the artifacts generated from the protocol are models of the data to be collected, and the clinician conducting the trial instantiates such models in reporting observations, again by completing a form to create a conformant XML document, representing the data gathered during that observation. Simple standard form management tools are all that is needed. Our approach is applicable to a wide variety of information-modelling domains: not just clinical trials, but also electronic public sector computing, customer relationship management, document workflow, and so on. © 2012 Springer-Verlag.
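The instantiate-and-validate step, checking that a completed form yields a document conformant to the metamodel's XML Schema, can be sketched as follows, assuming lxml; the toy schema and document stand in for a trial-protocol metamodel and a protocol instance.

```python
# A minimal sketch of validating a model instance against its metamodel,
# both expressed in XML. The schema and document are toy stand-ins.
from lxml import etree

schema_doc = etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="trial">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="arm" type="xs:string" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
""")
schema = etree.XMLSchema(schema_doc)

protocol = etree.XML(b"<trial><title>Example</title><arm>control</arm>"
                     b"<arm>treatment</arm></trial>")
print("conformant:", schema.validate(protocol))  # True
```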
Abstract:
The current assured forwarding (AF) service in differentiated services (DiffServ) networks can provide stable throughput guarantees, but lacks efficient schemes for guaranteeing queuing delay and packet loss performance. By analyzing the steady-state operating point of the RIO queue, this paper proposes two active queue management algorithms with adaptive control policies, ARIO-D and ARIO-L. Simulation results show that, while preserving RIO's bandwidth-guarantee capability, the two algorithms provide stable and differentiated performance in queuing delay and loss ratio, respectively. By deploying ARIO-D and ARIO-L, the AF service can provide quantitative guarantees for multimedia traffic across multiple QoS metrics.
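The RIO (RED with In/Out) mechanism that the ARIO variants adapt can be sketched as follows; the thresholds, drop probabilities and EWMA weight are illustrative, and the adaptive re-tuning that distinguishes ARIO-D and ARIO-L is not reproduced here.

```python
# A minimal sketch of the RIO drop decision: in-profile packets see a
# gentle RED curve over the in-profile average; out-of-profile packets
# see an aggressive curve over the total average.
import random

def red_drop_prob(avg, min_th, max_th, max_p):
    """Classic RED drop probability as a function of average queue size."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

class RIOQueue:
    def __init__(self):
        self.avg_in = 0.0     # EWMA over in-profile packets only
        self.avg_total = 0.0  # EWMA over all packets
        self.w = 0.002        # EWMA weight

    def enqueue(self, qlen_in, qlen_total, in_profile):
        """Return True to accept the arriving packet, False to drop it."""
        self.avg_in += self.w * (qlen_in - self.avg_in)
        self.avg_total += self.w * (qlen_total - self.avg_total)
        if in_profile:   # gentler parameters for in-profile traffic
            p = red_drop_prob(self.avg_in, 40, 70, 0.02)
        else:            # out-of-profile traffic is dropped earlier
            p = red_drop_prob(self.avg_total, 10, 40, 0.10)
        return random.random() >= p
```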
Abstract:
Developers of interactive software are confronted by an increasing variety of software tools to help engineer the interactive aspects of software applications. Typically resorting to ad hoc means of tool selection, developers are often dissatisfied with their chosen tool because it lacks required functionality or does not fit seamlessly within the context in which it is to be used. This paper describes a system for evaluating the suitability of user interface development tools for use in software development organisations and projects, such that the selected tool appears ‘invisible’ within its anticipated context of use. The paper also outlines and presents the results of an informal empirical study and a series of observational case studies of the system.
Abstract:
The growth of the discipline of translation studies has been accompanied by a renewed reflection on the object of research and our metalanguage. These developments have also been necessitated by the diversification of professions within the language industry. The very label translation is often avoided in favour of alternative terms, such as localisation (of software), transcreation (of advertising), and transediting (of information from press agencies). The competences framework developed for the European Master’s in Translation network speaks of experts in multilingual and multimedia communication to account for the complexity of translation competence. This paper addresses the following related questions: (i) How can translation competence in such a wide sense be developed in training programmes? (ii) Do some competences required in the industry go beyond translation competence? and (iii) What challenges do labels such as transcreation pose?