940 results for Formal specification


Relevance:

60.00%

Abstract:

Formalizing algorithm derivations is a necessary prerequisite for developing automated algorithm design systems. This report describes a derivation of an algorithm for incrementally matching conjunctive patterns against a growing database. This algorithm, which is modeled on the Rete matcher used in the OPS5 production system, forms a basis for efficiently implementing a rule system. The highlights of this derivation are: (1) a formal specification for the rule system matching problem, (2) derivation of an algorithm for this task using a lattice-theoretic model of conjunctive and disjunctive variable substitutions, and (3) optimization of this algorithm, using finite differencing, for incrementally processing new data.
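
As a rough illustration of the incremental idea only (not the report's derivation), the hypothetical Java sketch below keeps the matches of a two-condition conjunctive pattern up to date by joining each newly inserted fact against the stored facts, in the spirit of a Rete-style matcher and of finite differencing; all class and method names are invented.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal Rete-flavoured sketch: match the conjunctive pattern
 *  parent(X, Y) AND parent(Y, Z) incrementally as facts arrive. */
public class IncrementalMatcher {
    record Fact(String parent, String child) {}
    record Match(String x, String y, String z) {}

    private final List<Fact> facts = new ArrayList<>();     // growing database
    private final List<Match> matches = new ArrayList<>();  // maintained result

    /** Finite-differencing style update: join only the new fact against
     *  the stored facts instead of recomputing all matches from scratch. */
    public List<Match> addFact(Fact f) {
        List<Match> delta = new ArrayList<>();
        for (Fact g : facts) {
            if (f.child().equals(g.parent())) delta.add(new Match(f.parent(), f.child(), g.child()));
            if (g.child().equals(f.parent())) delta.add(new Match(g.parent(), g.child(), f.child()));
        }
        facts.add(f);
        matches.addAll(delta);
        return delta;                                        // new matches caused by f
    }

    public static void main(String[] args) {
        IncrementalMatcher m = new IncrementalMatcher();
        m.addFact(new Fact("ann", "bob"));
        System.out.println(m.addFact(new Fact("bob", "cid"))); // [Match[x=ann, y=bob, z=cid]]
    }
}
```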

Relevance:

60.00%

Abstract:

We describe a compositional framework, together with its supporting toolset, for hardware/software co-design. Our framework is an integration of a formal approach within a traditional design flow. The formal approach is based on Interval Temporal Logic and its executable subset, Tempura. Refinement is the key element in our framework because it derives the software and hardware parts of the implementation from a single formal specification of the system, while preserving all properties of the system specification. During refinement, simulation is used to choose the appropriate refinement rules, which are applied automatically in the HOL system. The framework is illustrated with two case studies. The work presented is part of a UK collaborative research project between the Software Technology Research Laboratory at De Montfort University and the Oxford University Computing Laboratory.
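
The property-preservation guarantee behind such a refinement-based flow can be stated generically as follows (a textbook formulation rather than the paper's own definitions), reading Spec ⊑ Impl as "Impl refines Spec" and ⊨ as "satisfies":

```latex
Spec \sqsubseteq Impl \;\wedge\; Spec \models \varphi \;\Longrightarrow\; Impl \models \varphi
```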

Relevance:

60.00%

Abstract:

Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism for both enclosing interactions among system components and coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. This architecture provides built-in Communicating Sequential Processes (CSPs) and predefined channels to coordinate exception handling of the user-defined components. Hence some safety properties concerning action scoping and concurrent exception handling can be proved by using the FDR (Failure Divergence Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used and proved as users define components with normal and exceptional behaviors. (C) 2010 Elsevier B.V. All rights reserved.
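
To give a concrete flavour of the coordination idea, the hypothetical Java sketch below (not the article's CSP model, and with invented names) shows a coordinator that collects the exceptions raised concurrently by the participants of an action and resolves them to a single exception that every participant then handles, which is the essence of CA-action exception coordination.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of CA-action style exception coordination:
 *  exceptions raised concurrently inside an action are resolved to a
 *  single exception that all participants handle together. */
public class ExceptionCoordinator {
    private final List<Exception> raised = new ArrayList<>();

    /** A participant reports the exception it observed (null if none). */
    public synchronized void report(Exception e) {
        if (e != null) raised.add(e);
    }

    /** Resolve concurrent exceptions to one "covering" exception; here we
     *  simply pick the first, a real system would use an exception tree. */
    public synchronized Exception resolve() {
        return raised.isEmpty() ? null : raised.get(0);
    }

    public static void main(String[] args) {
        ExceptionCoordinator c = new ExceptionCoordinator();
        c.report(new IllegalStateException("sensor failure"));
        c.report(null);
        Exception handled = c.resolve();   // all participants switch to their
        System.out.println(handled);       // handler for this resolved exception
    }
}
```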

Relevance:

60.00%

Abstract:

MAIDL, André Murbach; CARVILHE, Claudio; MUSICANTE, Martin A. Maude Object-Oriented Action Tool. Electronic Notes in Theoretical Computer Science. [S.l.: s.n.], 2008.

Relevance:

60.00%

Abstract:

The monitoring of patients in hospitals is usually performed in a manual or semi-automated way, in which the members of the healthcare team must constantly visit the patients to ascertain their health condition. This procedure, however, compromises the quality of the monitoring, since the shortage of physical and human resources in hospitals tends to overwhelm the healthcare team, preventing its members from visiting patients as frequently as needed. Given this, many works in the literature propose alternatives aimed at improving this monitoring through the use of wireless networks. In these works, the network is intended only for the traffic generated by medical sensors, and it cannot be allocated for the transmission of data from applications running on user stations in the hospital. In hospital automation environments, however, this is a drawback, considering that the data generated by such applications can be directly related to the patient monitoring being conducted. Thus, this thesis defines Wi-Bio, a communication protocol aimed at the establishment of IEEE 802.11 networks for patient monitoring, capable of enabling the harmonious coexistence of the traffic generated by medical sensors and by user stations. The formal specification and verification of Wi-Bio were carried out through the design and analysis of Petri net models, and its validation was performed through simulations with the Network Simulator 2 (NS2) tool. The NS2 simulations were designed to portray a real patient monitoring environment, corresponding to a floor of the nursing wards of the University Hospital Onofre Lopes (HUOL), located in Natal, Rio Grande do Norte. Moreover, in order to verify the feasibility of Wi-Bio with respect to the wireless network standards prevailing in the market, the test scenario was also simulated under a perspective in which the network elements used the HCCA access mechanism described in the IEEE 802.11e amendment. The results confirmed the validity of the designed Petri nets and showed that Wi-Bio, in addition to presenting superior performance compared to HCCA on most of the items analyzed, was also able to promote an efficient integration between the data generated by medical sensors and by user applications on the same wireless network.
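
Since the protocol was specified with Petri nets, a minimal sketch of the underlying formalism may help: a transition is enabled when every input place holds a token, and firing it moves tokens from input places to output places. This is generic place/transition Petri net code in Java with invented place names, not the Wi-Bio models themselves.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal place/transition Petri net: marking, enabledness, firing. */
public class PetriNet {
    private final Map<String, Integer> marking = new HashMap<>();

    public void put(String place, int tokens) { marking.put(place, tokens); }

    private int tokens(String place) { return marking.getOrDefault(place, 0); }

    /** A transition is enabled if each input place carries at least one token. */
    public boolean enabled(List<String> inputs) {
        return inputs.stream().allMatch(p -> tokens(p) > 0);
    }

    /** Fire: consume one token per input place, produce one per output place. */
    public void fire(List<String> inputs, List<String> outputs) {
        if (!enabled(inputs)) throw new IllegalStateException("transition not enabled");
        inputs.forEach(p -> marking.put(p, tokens(p) - 1));
        outputs.forEach(p -> marking.put(p, tokens(p) + 1));
    }

    public static void main(String[] args) {
        PetriNet net = new PetriNet();
        net.put("sensorReady", 1);
        net.put("mediumFree", 1);
        net.fire(List.of("sensorReady", "mediumFree"), List.of("transmitting"));
        System.out.println(net.marking); // sensorReady=0, mediumFree=0, transmitting=1
    }
}
```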

Relevance:

60.00%

Abstract:

With the increasing complexity of software systems, there is also increased concern about their faults. These faults can cause financial losses and even loss of life. Therefore, we propose in this paper the minimization of faults in software by using formally specified tests. The combination of testing and formal specifications is gaining strength in research, mainly through Model-Based Testing (MBT). The development of software from formal specifications, when the whole refinement process is carried out rigorously, ensures that what is specified will be implemented, so the implementation generated from these specifications accurately reflects what was specified. However, the specification is not always refined down to the level of implementation and code generation, and in these cases the tests generated from the specification tend to find faults. Additionally, the generation of so-called "invalid tests", i.e. tests that exercise application scenarios not addressed in the specification, complements the formal development process even more significantly. Therefore, this paper proposes a method for generating tests from B formal specifications. The method is structured in pseudo-code and is based on the systematization of the black-box testing techniques of boundary value analysis and equivalence partitioning, as well as the orthogonal pairs technique. The method was applied to a B specification, producing B test machines that generate test cases independent of the implementation language. To validate the method, the test cases were manually translated into JUnit test cases, and the application, created from the B specification and developed in Java, was tested. Faults were found during the execution of the JUnit test cases.
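
As a hedged illustration of the last step (class names and values are hypothetical, not those of the case study), boundary-value test cases derived from a specification precondition such as 0 <= amount <= limit might be translated manually into JUnit along these lines:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

/** Hypothetical JUnit translation of boundary-value test cases derived
 *  from a B precondition such as 0 <= amount <= limit. */
public class WithdrawBoundaryTest {

    // Simple stand-in for the implementation generated from the B machine.
    static boolean validAmount(int amount, int limit) {
        return amount >= 0 && amount <= limit;
    }

    @Test
    public void boundariesInsideThePartitionAreAccepted() {
        assertTrue(validAmount(0, 100));      // lower boundary
        assertTrue(validAmount(100, 100));    // upper boundary
    }

    @Test
    public void invalidTestsOutsideTheSpecificationAreRejected() {
        assertFalse(validAmount(-1, 100));    // just below the lower boundary
        assertFalse(validAmount(101, 100));   // just above the upper boundary
    }
}
```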

Relevance:

60.00%

Abstract:

This paper presents a contribution to the international Verified Software Repository effort through the formal specification of the FreeRTOS real-time microkernel. The specification was written at an abstract level using the B method. Properties of the microkernel were chosen and set as specification requirements, and the specification was built by modeling the microkernel functions that implement these properties. This work is intended to encourage the formal verification of FreeRTOS and also to contribute to the formal development of real-time microkernels based on FreeRTOS. Furthermore, the model provides formal documentation of the microkernel, demonstrating its features and how its internal state changes. Finally, this work can serve as an example of specifying an existing system with the B method.
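
As a rough sketch of the modelling style only (plain Java mirroring the shape of a B machine, with invented names; the actual specification is written in B), a scheduler operation can be expressed as a state change guarded by a precondition and checked against an invariant:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical Java rendering of a B-style model of a tiny scheduler:
 *  state variables, an invariant, and operations with preconditions. */
public class SchedulerModel {
    private final Deque<String> ready = new ArrayDeque<>();  // ready queue
    private String running = null;                           // running task or null

    /** Invariant: the running task is never also in the ready queue. */
    private boolean invariant() {
        return running == null || !ready.contains(running);
    }

    /** Operation: add a new task to the ready queue. */
    public void create(String task) {
        ready.add(task);
        assert invariant();
    }

    /** Operation: dispatch the next ready task.
     *  Precondition: no task is running and the ready queue is non-empty. */
    public void dispatch() {
        if (running != null || ready.isEmpty()) throw new IllegalStateException("precondition violated");
        running = ready.poll();
        assert invariant();
    }

    public static void main(String[] args) {
        SchedulerModel m = new SchedulerModel();
        m.create("taskA");
        m.dispatch();
        System.out.println("running=" + m.running);  // running=taskA
    }
}
```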

Relevance:

60.00%

Abstract:

The development of smart card applications requires a high level of reliability, and formal methods provide means for this reliability to be achieved. The BSmart method and tool support the development of smart card applications with the B method, generating Java Card code from B specifications. For development with BSmart to be effectively rigorous without overloading the user, it is important to have a library of reusable components built in B; the goal of KitSmart is to provide this support. The first study of the composition of this library was an undergraduate project at the Universidade Federal do Rio Grande do Norte, carried out by Thiago Dutra in 2006. That first version of the kit resulted in a B specification of the Java Card primitive types byte, short and boolean, and in the creation of reusable components for application development. This work improves KitSmart by adding a specification of the Java Card API in B and a guide for the creation of new components. The Java Card API in B, besides being available for the development of applications, is also useful as documentation of each API class. The reusable components correspond to modules for manipulating specific structures, such as date and time, which are not natively available in B or Java Card. These Java Card components are generated from specifications formally verified in B. The guide contains a quick reference on how to specify some structures and on how some situations were adapted from object orientation to the B method. This work was evaluated through a case study carried out with the BSmart tool, which makes use of the KitSmart library. In this case study, it is possible to see the contribution of the components to a B specification. The kit should be useful for B method users and Java Card application developers.
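
A hedged sketch of the kind of reusable component the kit targets is shown below in plain Java (invented names, loosely imitating what code generated from a B date machine might look like; the real components are specified in B and generated for Java Card): state plus operations guarded by explicit preconditions.

```java
/** Hypothetical sketch of a reusable date component in the KitSmart spirit:
 *  state variables and operations guarded by explicit preconditions. */
public class DateComponent {
    private short day = 1, month = 1, year = 2000;

    /** Simplified validity check standing in for the B machine's precondition. */
    private static boolean valid(short d, short m, short y) {
        return d >= 1 && d <= 31 && m >= 1 && m <= 12 && y >= 0;
    }

    /** Precondition (as in the B machine): the new value must be a valid date. */
    public void setDate(short d, short m, short y) {
        if (!valid(d, m, y)) throw new IllegalArgumentException("invalid date");
        day = d; month = m; year = y;
    }

    public short getDay()   { return day; }
    public short getMonth() { return month; }
    public short getYear()  { return year; }

    public static void main(String[] args) {
        DateComponent date = new DateComponent();
        date.setDate((short) 29, (short) 2, (short) 2008);
        System.out.println(date.getDay() + "/" + date.getMonth() + "/" + date.getYear());
    }
}
```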

Relevance:

60.00%

Abstract:

Removing inconsistencies in a project is a less expensive activity when done in the early steps of design. The use of formal methods improves the understanding of systems and offers techniques, such as formal specification and verification, to identify these problems in the initial stages of a project. However, the transformation of a formal specification into a programming language is a non-trivial and error-prone task, especially when done manually, and tool support at this stage can bring great benefits to the final product. This paper proposes the extension of a tool whose focus is the automatic translation of specifications written in CSPM into Handel-C. CSP is a formal description language suitable for concurrent systems, and CSPM is the machine-readable notation used by its supporting tools. Handel-C is a programming language whose programs can be compiled directly onto FPGAs. Our extension increases the number of CSPM operators accepted by the tool, allowing the user to define local processes, to rename channels in a process and to use Boolean guards on external choices. In addition, we also propose the implementation of a communication protocol that eliminates some restrictions on the parallel composition of processes in the translation into Handel-C, allowing communication on a same channel between multiple processes to be mapped in a consistent manner and ensuring that improper communication on a channel, i.e. communication that is not allowed in the system specification, does not occur in the generated code.
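
The parallel-composition restriction concerns how several processes may share one channel. A hypothetical Java sketch of the underlying idea is shown below: a one-place synchronised channel that serialises competing writers so that each value is delivered to exactly one reader. The tool itself emits Handel-C, not Java, and the class below is only an illustration of the channel-sharing discipline.

```java
/** Hypothetical sketch of a one-place channel shared by several processes:
 *  writers are serialised and each written value reaches exactly one reader. */
public class Channel<T> {
    private T slot = null;
    private boolean full = false;

    public synchronized void write(T value) throws InterruptedException {
        while (full) wait();          // wait until the previous value was taken
        slot = value;
        full = true;
        notifyAll();
    }

    public synchronized T read() throws InterruptedException {
        while (!full) wait();         // wait for a writer
        T value = slot;
        full = false;
        notifyAll();
        return value;
    }

    public static void main(String[] args) throws Exception {
        Channel<Integer> c = new Channel<>();
        Runnable writer = () -> { try { c.write(42); } catch (InterruptedException ignored) {} };
        new Thread(writer).start();
        new Thread(writer).start();   // two competing writers on the same channel
        System.out.println(c.read()); // 42
        System.out.println(c.read()); // 42
    }
}
```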

Relevance:

60.00%

Abstract:

Protocol-based medicine represents an interdisciplinary focal point of computer science. As a particular slice of the medical subfields, it allows the relatively formal specification of processes in the three areas of prevention, diagnosis and therapy. The latter has always received special attention and has long served, within the context of clinical studies, as a projection surface for information technology concepts. Yet the euphoria of the early years gives way to sobriety at every stocktaking: only very few of the countless projects have found their way into routine everyday practice, and most undertakings have failed on the illusion that medical workflows are completely computable. The traditional view of clinical practice rests on a block-oriented conception of the therapy execution process. It arises from decomposing that process into individual therapy branches, which are assembled from predefined blocks. These blocks can be executed sequentially or in parallel and are themselves composed of sets of elements that represent the activities of the lowest level. The block-oriented structural model is complemented by a rule-oriented execution model: a complex set of rules defines conditions for the temporal and logical dependencies of the blocks, whose arrangement is formed by the execution process. Modelling therapy execution therefore first faces the fundamental question of how far the traditional view is suitable as an internal representation. The overriding goal is the integration of the different levels of the therapy specification, which includes not only the structural component but above all the execution component; a suitable rule model is required that meets the specific needs of therapy monitoring, and the central task is to bring these different levels together. A sensible alternative to the traditional view is offered by the state-oriented model of the therapy execution process. The state-oriented model is based on the view that the entire therapy execution process ultimately describes a linear sequence of states, where each state transition is initiated by an event, is bound to certain conditions and may trigger certain actions. The parallelism of the block-oriented model recedes into the background, because the measures to be carried out are merely properties of the states and not structural elements of the execution specification. At any point in time exactly one state is active, and it represents one of finitely many clinical situations, with all their specific activities and execution rules. The advantages of the state-oriented model lie in this integration: the basic structure combines the static description of the possible phase arrangements with the dynamic execution of active rules. The original contents of the block-oriented model are reproduced as ordinary properties of the states and thus constitute only a special case of the state-based view. Further ways of enriching the states with additional details are as conceivable as they are sensible, yet the basic structure remains the same under every extension. The result is a reusable skeleton, a common denominator for fulfilling the monitoring task.
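
A minimal Java sketch of the state-oriented idea (hypothetical names; the thesis does not prescribe an implementation language): exactly one state is active at a time, each state carries its activities as properties, and transitions are triggered by events, guarded by conditions and may trigger actions.

```java
import java.util.List;
import java.util.Map;
import java.util.function.BooleanSupplier;

/** Hypothetical sketch of the state-oriented therapy execution model:
 *  one active state, event-triggered transitions with guard and action. */
public class TherapyExecution {
    record Transition(String event, BooleanSupplier condition, Runnable action, String target) {}

    private String active = "Induction";             // exactly one active state

    // Activities are properties of states, not structural elements of the flow.
    private final Map<String, List<String>> activities = Map.of(
            "Induction", List.of("administer drug A", "monitor blood count"),
            "Consolidation", List.of("administer drug B"));

    private final Map<String, List<Transition>> transitions = Map.of(
            "Induction", List.of(new Transition(
                    "bloodCountRecovered",
                    () -> true,                       // guard condition
                    () -> System.out.println("schedule consolidation"),
                    "Consolidation")));

    /** Each transition is initiated by an event, bound to a condition,
     *  and may trigger an action before the next state becomes active. */
    public void handle(String event) {
        for (Transition t : transitions.getOrDefault(active, List.of())) {
            if (t.event().equals(event) && t.condition().getAsBoolean()) {
                t.action().run();
                active = t.target();
                return;
            }
        }
    }

    public static void main(String[] args) {
        TherapyExecution exec = new TherapyExecution();
        exec.handle("bloodCountRecovered");
        System.out.println(exec.active + " -> " + exec.activities.get(exec.active));
    }
}
```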

Relevance:

60.00%

Abstract:

The growing complexity, heterogeneity and dynamism inherent in telecommunications networks, distributed systems and the emerging advanced information and communication services, as well as their increased criticality and strategic importance, call for the adoption of increasingly more sophisticated technologies for their management, coordination and integration by network operators, service providers and end-user companies to assure adequate levels of functionality, performance and reliability. The management strategies adopted traditionally follow models that are too static and centralised, have a high supervision component and are difficult to scale. The pressing need to flexibilise management and, at the same time, make it more scalable and robust recently led to a lot of interest in developing new paradigms based on hierarchical and distributed models, as a natural evolution from the first weakly distributed hierarchical models that succeeded the centralised paradigm. Thus new models based on management by delegation, the mobile code paradigm, distributed objects and web services came into being. These alternatives have turned out to be enormously robust, flexible and scalable as compared with the traditional management strategies. However, many problems still remain to be solved. Current research lines assume that the distributed hierarchical paradigm has as yet failed to solve many of the problems related to robustness, scalability and flexibility and advocate migration towards a strongly distributed cooperative paradigm. 
These lines of research were spawned by Distributed Artificial Intelligence (DAI) and, specifically, the autonomous agent paradigm and Multi-Agent Systems (MAS). They all revolve around a series of objectives, which can be summarised as achieving greater management functionality autonomy and a greater self-configuration capability, which solves the problems of scalability and the need for supervision that plague current systems, evolving towards strongly distributed and goal-driven cooperative control techniques and semantically enhancing information models. More and more researchers are starting to use agents for network and distributed systems management. However, the boundaries established in their work between mobile agents (that follow the mobile code paradigm) and autonomous agents (that really follow the cooperative paradigm) are fuzzy. Many of these approaches focus on the use of mobile agents, which, as was the case with the above-mentioned mobile code techniques, means that they can inject more dynamism into the traditional concept of management by delegation. Accordingly, they are able to flexibilise management, distribute management logic close to the data and distribute control. However, they remain within the distributed hierarchical paradigm. While a management architecture faithful to the strongly distributed cooperative paradigm has yet to be defined, these lines of research have revealed that the information, communication and organisation models of existing management architectures are far from adequate. In this context, this dissertation presents an architectural model for the holonic management of distributed systems and services through autonomous agent societies. The main objectives of this model are to raise the level of management task automation, increase the scalability of management solutions, provide support for delegation by both domains and macro-tasks and achieve a high level of interoperability in open environments. Bearing in mind these objectives, a description logic-based formal semantic information model has been developed, which increases management automation by using rational autonomous agents capable of reasoning, inferring and dynamically integrating knowledge and services conceptualised by means of the CIM model and formalised at the semantic level by means of description logic. The information model also includes a mapping, at the CIM metamodel level, to the OWL ontology specification language, which amounts to a significant advance in the field of XML-based model and metainformation representation and exchange. At the interaction level, the model introduces a formal specification language (ACSL) of conversations between agents based on speech act theory and contributes an operational semantics for this language that eases the task of verifying formal properties associated with the interaction protocol. A role-oriented holonic organisational model has also been developed, whose main features meet the requirements demanded by emerging distributed services, including no centralised control, dynamic restructuring capabilities, cooperative skills and facilities for adaptation to different organisational cultures. The model includes a normative submodel adapted to management holon autonomy and based on the deontic and action modal logics.

Relevance:

60.00%

Abstract:

Directed hypergraphs are an intuitive modelling formalism that have been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong-connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram where more than 70% of the nodes are distributed into three large sets and one of these sets is a large strongly-connected component. 
This particular structure has also been observed in complex networks in other fields such as biology. Similar studies cannot be conducted on directed hypergraphs because no algorithm exists for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simple approach to compute the strongly-connected components, based on the fact that two strongly-connected nodes of a graph reach exactly the same set of nodes; in other words, two nodes belong to the same strongly-connected component exactly when their sets of reachable nodes coincide. Both algorithms are empirically evaluated to compare their performances. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships; and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can succinctly be represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and in this manner we compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO Bioportal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
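
A hedged Java sketch of the simpler of the two approaches, as described above, is shown below. It assumes B-connectivity reachability (a hyperedge can be traversed once its whole tail has been reached), uses invented names and applies none of the optimisations discussed in the thesis: compute the set of nodes reachable from each node and group nodes whose reachable sets coincide.

```java
import java.util.*;

/** Hypothetical sketch: strongly-connected components of a directed hypergraph
 *  by grouping nodes with identical reachable sets (B-connectivity assumed). */
public class HypergraphScc {
    /** A directed hyperedge relates a tail set of nodes to a head set of nodes. */
    record Hyperedge(Set<Integer> tail, Set<Integer> head) {}

    private final List<Hyperedge> edges = new ArrayList<>();
    private final Set<Integer> nodes = new TreeSet<>();

    public void addEdge(Set<Integer> tail, Set<Integer> head) {
        edges.add(new Hyperedge(tail, head));
        nodes.addAll(tail);
        nodes.addAll(head);
    }

    /** Nodes reachable from v: traverse a hyperedge once its whole tail is reached. */
    private Set<Integer> reach(int v) {
        Set<Integer> r = new TreeSet<>(Set.of(v));
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Hyperedge e : edges) {
                if (r.containsAll(e.tail()) && !r.containsAll(e.head())) {
                    r.addAll(e.head());
                    changed = true;
                }
            }
        }
        return r;
    }

    /** Group nodes whose reachable sets coincide: each group is one component. */
    public Collection<Set<Integer>> sccs() {
        Map<Set<Integer>, Set<Integer>> byReach = new HashMap<>();
        for (int v : nodes) byReach.computeIfAbsent(reach(v), k -> new TreeSet<>()).add(v);
        return byReach.values();
    }

    public static void main(String[] args) {
        HypergraphScc h = new HypergraphScc();
        h.addEdge(Set.of(1), Set.of(2));
        h.addEdge(Set.of(2), Set.of(3));
        h.addEdge(Set.of(3), Set.of(1));
        h.addEdge(Set.of(1, 2), Set.of(4));
        System.out.println(h.sccs());   // components {1, 2, 3} and {4} (order may vary)
    }
}
```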

Relevance:

60.00%

Abstract:

While object-oriented programming offers great solutions for today's software developers, this success has created difficult problems in class documentation and testing. In Java, two tools provide assistance: Javadoc allows class interface documentation to be embedded as code comments and JUnit supports unit testing by providing assert constructs and a test framework. This paper describes JUnitDoc, an integration of Javadoc and JUnit, which provides better support for class documentation and testing. With JUnitDoc, test cases are embedded in Javadoc comments and used as both examples for documentation and test cases for quality assurance. JUnitDoc extracts the test cases for use in HTML files serving as class documentation and in JUnit drivers for class testing. To address the difficult problem of testing inheritance hierarchies, JUnitDoc provides a novel solution in the form of a parallel test hierarchy. A small controlled experiment compares the readability of JUnitDoc documentation to formal documentation written in Object-Z. Copyright (c) 2005 John Wiley & Sons, Ltd.
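
The exact JUnitDoc comment syntax is not reproduced here; the sketch below merely illustrates the idea of a test case embedded in a Javadoc comment that doubles as a usage example, with an invented tag name standing in for whatever marker the tool actually uses.

```java
/** A minimal class documented and tested in the JUnitDoc spirit: the example
 *  in the comment doubles as an executable test case (tag name is invented). */
public class Stack<T> {

    private final java.util.ArrayList<T> items = new java.util.ArrayList<>();

    /**
     * Pushes an element and makes it the new top of the stack.
     *
     * <pre>
     * // hypothetical test-case tag: the snippet below would be extracted both
     * // into the HTML documentation and into a generated JUnit driver
     * Stack&lt;String&gt; s = new Stack&lt;&gt;();
     * s.push("a");
     * assert s.top().equals("a");
     * </pre>
     */
    public void push(T item) { items.add(item); }

    /** Returns the most recently pushed element. */
    public T top() { return items.get(items.size() - 1); }
}
```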

Relevance:

60.00%

Abstract:

Real-time software systems are rarely developed once and left to run. They are subject to changes of requirements as the applications they support expand, and they commonly outlive the platforms they were designed to run on. A successful real-time system is duplicated and adapted to a variety of applications - it becomes a product line. Current methods for real-time software development are commonly based on low-level programming languages and involve considerable duplication of effort when a similar system is to be developed or the hardware platform changes. To provide more dependable, flexible and maintainable real-time systems at a lower cost, what is needed is a platform-independent approach to real-time systems development. The development process is composed of two phases: a platform-independent phase, which defines the desired system behaviour and develops a platform-independent design and implementation, and a platform-dependent phase that maps the implementation onto the target platform. The latter phase should be highly automated. For critical systems, assessing dependability is crucial. The partitioning into platform-dependent and platform-independent phases has to support verification of system properties through both phases.

Relevance:

60.00%

Abstract:

Starting with a UML specification that captures the underlying functionality of some given Java-based concurrent system, we describe a systematic way to construct, from this specification, test sequences for validating an implementation of the system. The approach is to first extend the specification to create UML state machines that directly address those aspects of the system we wish to test. To be specific, the extended UML state machines can capture state information about the number of waiting threads or the number of threads blocked on a given object. Using the SAL model checker we can generate from the extended UML state machines sequences that cover all the various possibilities of events and states. These sequences can then be directly transformed into test sequences suitable for input into a testing tool such as ConAn. As an illustration, the methodology is applied to generate sequences for testing a Java implementation of the producer-consumer system. © 2005 IEEE
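
For reference, the kind of Java producer-consumer implementation such test sequences target might look as follows: a generic bounded buffer rather than the paper's exact system. The concurrency states of interest, such as how many threads are blocked waiting in put or get, are exactly what the extended UML state machines track and what the generated sequences exercise.

```java
/** Generic bounded buffer for a producer-consumer system: producers block
 *  when the buffer is full, consumers block when it is empty. */
public class BoundedBuffer {
    private final int[] items;
    private int count = 0, in = 0, out = 0;

    public BoundedBuffer(int capacity) { items = new int[capacity]; }

    public synchronized void put(int value) throws InterruptedException {
        while (count == items.length) wait();   // producer blocks when full
        items[in] = value;
        in = (in + 1) % items.length;
        count++;
        notifyAll();
    }

    public synchronized int get() throws InterruptedException {
        while (count == 0) wait();              // consumer blocks when empty
        int value = items[out];
        out = (out + 1) % items.length;
        count--;
        notifyAll();
        return value;
    }

    public static void main(String[] args) throws Exception {
        BoundedBuffer buffer = new BoundedBuffer(1);
        new Thread(() -> {
            try { buffer.put(1); buffer.put(2); } catch (InterruptedException ignored) {}
        }).start();
        System.out.println(buffer.get() + ", " + buffer.get());  // 1, 2
    }
}
```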