969 results for BioArray Software Environment
Abstract:
Well-understood methods exist for developing programs from given specifications. A formal method identifies proof obligations at each development step: if all such proof obligations are discharged, a precisely defined class of errors can be excluded from the final program. For a class of closed systems, such methods offer a gold standard against which less formal approaches can be measured. For open systems (those which interact with the physical world), the task of obtaining the program specification can be as challenging as the task of deriving the program. And when a system of this class must tolerate certain kinds of unreliability in the physical world, it is still more challenging to reach confidence that the specification obtained is adequate. We argue that widening the notion of software development to include specifying the behaviour of the relevant parts of the physical world gives a way to derive the specification of a control system and also to record precisely the assumptions being made about the world outside the computer.
Abstract:
Arguably, the world has become one large pervasive computing environment. Our planet is growing a digital skin of a wide array of sensors, hand-held computers, mobile phones, laptops, web services and publicly accessible web-cams. Often, these devices and services are deployed in groups, forming small communities of interacting devices. Service discovery protocols allow processes executing on each device to discover services offered by other devices within the community. These communities can be linked together to form a wide-area pervasive environment, allowing processes in one group to interact with services in another. However, the costs of communication and the protocols by which this communication is mediated in the wide-area differ from those of intra-group, or local-area, communication. Communication is an expensive operation for small, battery-powered devices, but it is less expensive for servers and workstations, which have a constant power supply and are connected to high-bandwidth networks. This paper introduces Superstring, a peer-to-peer service discovery protocol optimised for use in the wide-area. Its goals are to minimise computation and memory overhead in the face of large numbers of resources. It achieves this memory and computation scalability by distributing the storage cost of service descriptions and the computation cost of queries over multiple resolvers.
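The scalability claim rests on partitioning both the storage of service descriptions and the work of answering queries across multiple resolvers. The toy Python sketch below illustrates only that general idea, not the actual Superstring protocol or its wire format: a service type is hashed to one of a few resolvers, so registrations and lookups for that type land on a single node. All names here are invented for illustration.

```python
import hashlib
from collections import defaultdict

# Hypothetical resolver pool; in a real deployment these would be
# separate hosts, each storing only its share of the descriptions.
RESOLVERS = ["resolver-a.example.org", "resolver-b.example.org",
             "resolver-c.example.org"]

def responsible_resolver(service_type: str) -> str:
    """Map a service type to the resolver that stores its descriptions."""
    digest = int(hashlib.sha1(service_type.encode()).hexdigest(), 16)
    return RESOLVERS[digest % len(RESOLVERS)]

registry = defaultdict(list)  # resolver -> list of (service_type, endpoint)

def register(service_type: str, endpoint: str) -> None:
    registry[responsible_resolver(service_type)].append((service_type, endpoint))

def query(service_type: str) -> list:
    # Only one resolver is contacted and only its descriptions are scanned,
    # which is where the memory/computation distribution comes from.
    resolver = responsible_resolver(service_type)
    return [e for (t, e) in registry[resolver] if t == service_type]

register("printer", "printer.lab1.example.org:9100")
register("printer", "printer.lab2.example.org:9100")
print(query("printer"))
```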
Abstract:
Effective comprehension of complex software systems requires understanding of both the individual documents that represent software and the complex relationships that exist within and between documents. Relationships of all kinds play a vital role in a software engineer's comprehension of, and navigation within and between, software documents. User-determined relationships have the additional role of enabling the engineer to create and maintain relational documentation that cannot be generated by tools or derived from other relationships. We argue that for a software development environment to effectively support the understanding of complex software systems, relational navigation must be supported at both the document-focused (intra-document) and relation-focused (inter-document) levels. The need for a relation-focused approach is highlighted by an evaluation of an existing document-focused relational interface. We conclude with the requirements for a relation-focused approach to relational navigation. These requirements focus on the user's perspective when interacting with a collection of related documents. We define the requirements for a software development environment that effectively supports the understanding of the software documents and relationships that define a complex software system.
Abstract:
The development of models in Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme), or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
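For readers unfamiliar with the layering described above, the following sketch shows the flavour of the Data/PDE abstraction, based on the publicly documented escript interface: a Finley mesh supplies the domain (numerical algorithm layer), and a linear PDE object expresses the mathematics and is handed down for solution. Class and argument names follow the escript user guide and may differ between versions.

```python
from esys.escript import whereZero
from esys.escript.linearPDEs import Poisson
from esys.finley import Rectangle

# Finley builds the finite-element mesh of the unit square.
domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)

# Fix the solution to zero on the left and bottom edges.
x = domain.getX()
gamma_d = whereZero(x[0]) + whereZero(x[1])

# Mathematical layer: describe -div(grad u) = f as a PDE object and let the
# underlying solver library return the solution as a Data object.
pde = Poisson(domain=domain)
pde.setValue(f=1.0, q=gamma_d)
u = pde.getSolution()
```

The point of the example is that the user works only with domain, PDE and Data objects; which mesh library or linear solver actually produces `u` is hidden behind the numerical algorithm layer.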
Abstract:
This work aims to investigate, in organizational practice, the articulations and disarticulations between training practices, seen as a mechanism that disciplines the individual (norms and standardization of work procedures), and the guidelines for training practices, seen as a mechanism for regulating (norms, standards of conduct) the relations between the organization and its employees. The research brings to the scientific academic environment a deeper understanding of the mechanisms of disciplinary power exercised by the organization studied over its employees and of the regulatory mechanisms bearing on the same phenomenon (training practices), in addition to exploring the question of the distinction and complementarity between the two. A qualitative approach was used, with a theoretical-methodological proposal combining content analysis and discourse analysis, based on an indicative and intentional sample; the company studied operates in the development and sale of human resources software. Three instruments were used for data collection: in-depth interviews, researcher observations, and documentary research. The analyses relating theory to the empirical data confirmed the existence of articulations and disarticulations between the mechanisms, along with other findings, corroborated by the training norm and the interviewees' statements.
Abstract:
In recent years, UK industry has seen an explosive growth in the number of 'Computer Aided Production Management' (CAPM) system installations. Of the many CAPM systems, materials requirement planning/manufacturing resource planning (MRP/MRPII) is the most widely implemented. Despite the huge investments in MRP systems, over 80 percent are said to have failed within 3 to 5 years of installation. Many people now assume that Just-In-Time (JIT) is the best manufacturing technique. However, those who have implemented JIT have found that it also has many problems. The author argues that the success of a manufacturing company will not be due to a system that complies with a single technique, but to the integration of many techniques and the ability to make them complement each other in a specific manufacturing environment. This dissertation examines the potential for integrating MRP with JIT and Two-Bin systems to reduce the operational costs involved in managing bought-out inventory. Within this framework it shows that controlling MRP is essential to facilitate the integration process. The behaviour of MRP systems depends on the complex interactions between the numerous control parameters used. Methodologies and models are developed to set these parameters. The models are based on the Pareto principle: business targets are used to derive a coherent set of parameters which not only enables those targets to be realised, but also facilitates JIT implementation. This approach is illustrated in the context of an actual manufacturing plant, IBM Havant (a high-volume electronics assembly plant in which the majority of materials are bought-out). The parameter-setting models are applicable to the control of bought-out items in a wide range of industries and are not dependent on specific MRP software. The models have produced successful results in several companies and are now being developed as commercial products.
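To make the Pareto-based idea concrete, the sketch below classifies bought-out items by annual spend (a standard ABC analysis) and attaches a different control policy to each class. The thresholds, part numbers and policies are invented for illustration; they are not taken from the dissertation or from the IBM Havant plant, and the dissertation's actual parameter-setting models are not reproduced here.

```python
# Toy ABC classification of bought-out items by annual spend.
items = {"PCB-100": 250_000, "CONN-7": 90_000, "SCREW-M3": 4_000,
         "LABEL-2": 1_500, "CABLE-9": 55_000}

total = sum(items.values())
ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)

policies, cumulative = {}, 0.0
for part, spend in ranked:
    cumulative += spend / total
    if cumulative <= 0.80:        # top ~80% of spend: class A items
        policies[part] = {"class": "A", "control": "JIT / daily call-off"}
    elif cumulative <= 0.95:      # next ~15% of spend: class B items
        policies[part] = {"class": "B", "control": "weekly MRP order"}
    else:                         # long tail of low-value items: class C
        policies[part] = {"class": "C", "control": "two-bin replenishment"}

for part, policy in policies.items():
    print(part, policy)
```

The design intent this illustrates is the one argued in the abstract: different techniques (JIT, MRP, Two-Bin) are applied where each is cheapest to operate, rather than forcing one technique onto the whole inventory.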
Abstract:
The work described was carried out as part of a collaborative Alvey software engineering project (project number SE057). The project collaborators were the Inter-Disciplinary Higher Degrees Scheme of the University of Aston in Birmingham, BIS Applied Systems Ltd. (BIS) and the British Steel Corporation. The aim of the project was to investigate the potential application of knowledge-based systems (KBSs) to the design of commercial data processing (DP) systems. The work was primarily concerned with BIS's Structured Systems Design (SSD) methodology for DP systems development and how users of this methodology could be supported using KBS tools. The problems encountered by users of SSD are discussed and potential forms of computer-based support for inexpert designers are identified. An architecture for a support environment for SSD, the Intellipse system, is proposed, based on the integration of KBS and non-KBS tools for individual design tasks within SSD. The Intellipse system has two modes of operation: Advisor and Designer. The design, implementation and user-evaluation of Advisor are discussed. The results of a Designer feasibility study, the aim of which was to analyse major design tasks in SSD to assess their suitability for KBS support, are reported. The potential role of KBS tools in the domain of database design is discussed. The project involved extensive knowledge engineering sessions with expert DP systems designers. Some practical lessons in relation to KBS development are derived from this experience. The nature of the expertise possessed by expert designers is discussed. The need for operational KBSs to be built to the same standards as other commercial and industrial software is identified. A comparison between current KBS and conventional DP systems development is made. On the basis of this analysis, a structured development method for KBSs is proposed: the POLITE model. Some initial results of applying this method to KBS development are discussed. Several areas for further research and development are identified.
Abstract:
Component-based development (CBD) has become an important emerging topic in the software engineering field. It promises long-sought-after benefits such as increased software reuse, reduced development time to market and, hence, reduced software production cost. Despite this huge potential, the lack of reasoning support and of a development environment for component modeling and verification may hinder its development. Methods and tools that can support component model analysis are highly appreciated by industry. Such tool support should be fully automated as well as efficient. At the same time, the reasoning tool should scale up well, as it may need to handle the hundreds or even thousands of components that a modern software system may have. Furthermore, a distributed environment that can effectively manage and compose components is also desirable. In this paper, we present an approach to the modeling and verification of a newly proposed component model using Semantic Web languages and their reasoning tools. We use the Web Ontology Language and the Semantic Web Rule Language to precisely capture the inter-relationships and constraints among the entities in a component model. Semantic Web reasoning tools are deployed to provide automated analysis support for the component models. Moreover, we also propose a service-oriented architecture (SOA)-based Semantic Web environment for CBD. The adoption of Semantic Web services and SOA makes our component environment more reusable, scalable, dynamic and adaptive.
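The general style of OWL-based component checking can be suggested with a small sketch. The ontology below is a made-up toy, not the component model proposed in the paper: it declares components and interfaces, defines a class of "deployable" components as those providing at least one interface, and asks an off-the-shelf reasoner to classify individuals against that constraint. The sketch uses the owlready2 Python library (an assumption about tooling; the paper does not prescribe it), and sync_reasoner() requires a Java runtime for the bundled reasoner.

```python
from owlready2 import get_ontology, Thing, ObjectProperty, sync_reasoner

onto = get_ontology("http://example.org/component-model.owl")

with onto:
    class Component(Thing): pass
    class Interface(Thing): pass
    class provides(ObjectProperty):
        domain = [Component]
        range = [Interface]
    class requires(ObjectProperty):
        domain = [Component]
        range = [Interface]
    # Toy constraint: a deployable component provides at least one interface.
    class DeployableComponent(Component):
        equivalent_to = [Component & provides.some(Interface)]

# A sample component instance and the interface it exposes.
logger = Component("Logger")
log_if = Interface("LogInterface")
logger.provides = [log_if]

# Invoke the external reasoner to classify classes and individuals.
sync_reasoner()
print(logger.is_a)  # DeployableComponent is expected to appear here
```

SWRL rules and the SOA-based environment described in the abstract would sit on top of this kind of ontology, but they are beyond the scope of this illustration.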
Abstract:
The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models, and it supports more effective software design in terms of understanding, sharing and reuse in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. In this paper, a methodology with tool support is proposed to automatically derive ontological metadata from formal software models and semantically describe them.
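The paper's own methodology and ontology are not reproduced here, but the following rdflib sketch suggests the kind of ontological metadata one might emit automatically from a formal model: each element of a (hypothetical, hand-written) abstract machine is turned into RDF triples under an illustrative vocabulary. The namespace, vocabulary terms and model structure are all assumptions made for the example.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Illustrative vocabulary for describing formal models as semantic metadata.
FM = Namespace("http://example.org/formal-model#")

# A toy "formal model": one abstract machine with two operations.
model = {"name": "Queue", "operations": ["enqueue", "dequeue"]}

g = Graph()
g.bind("fm", FM)

machine = FM[model["name"]]
g.add((machine, RDF.type, FM.AbstractMachine))
g.add((machine, RDFS.label, Literal(model["name"])))
for op in model["operations"]:
    op_node = FM[f'{model["name"]}_{op}']
    g.add((op_node, RDF.type, FM.Operation))
    g.add((op_node, FM.belongsTo, machine))
    g.add((op_node, RDFS.label, Literal(op)))

print(g.serialize(format="turtle"))
```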
Abstract:
Increasingly, software systems are required to survive variations in their execution environment with little or no human intervention. Such systems are called "eternal software systems". In contrast to the traditional view of development and execution as separate cycles, these modern software systems should not present such a separation. Research in MDE has been primarily concerned with the use of models during the first cycle, development (i.e. during design, implementation, and deployment), and has shown excellent results. In this paper the author argues that an eternal software system must have a first-class representation of itself available to enable change. These runtime representations (or runtime models) will depend on the kind of dynamic changes that we want to make available during execution, or on the kind of analysis we want the system to support. Hence, different models can be conceived. Self-representation inevitably implies the use of reflection. In this paper the author briefly summarizes research that supports the use of runtime models, and points out different issues and research questions. © 2009 IEEE.
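A minimal sketch of the runtime-model idea, with names invented for illustration and no connection to any particular MDE framework: the running system keeps a first-class description of (part of) itself, and changes applied to that description are reflected onto the live objects, so adaptation decisions can be made against the model during execution.

```python
# Toy runtime model: a registry of live objects and the attributes the
# model exposes for change; modifying the model reflects onto the objects.
class RuntimeModel:
    def __init__(self):
        self.elements = {}  # name -> {"object": obj, "attrs": exposed attribute names}

    def register(self, name, obj, attrs):
        self.elements[name] = {"object": obj, "attrs": set(attrs)}

    def set_attribute(self, name, attr, value):
        entry = self.elements[name]
        if attr not in entry["attrs"]:
            raise KeyError(f"{attr} is not part of {name}'s runtime model")
        # Reflection: push the model-level change onto the live object.
        setattr(entry["object"], attr, value)

class Cache:
    def __init__(self):
        self.max_entries = 128

cache = Cache()
model = RuntimeModel()
model.register("cache", cache, attrs=["max_entries"])

# An adaptation decision taken during execution, driven by the model.
model.set_attribute("cache", "max_entries", 512)
print(cache.max_entries)  # 512
```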