921 results for Requirements Engineering, Requirement Specification
Abstract:
The goal of the ontology requirements specification activity is to state why the ontology is being built, what its intended uses are, who the end users are, and which requirements the ontology should fulfill. This chapter presents detailed methodological guidelines for specifying ontology requirements efficiently. These guidelines will help ontology engineers to capture ontology requirements and produce the ontology requirements specification document (ORSD). The ORSD plays a key role during the ontology development process because it facilitates, among other activities, (1) the search for and reuse of existing knowledge resources with the aim of reengineering them into ontologies, (2) the search for and reuse of ontological resources (ontologies, ontology modules, ontology statements, as well as ontology design patterns), and (3) the verification of the ontology throughout its development.
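To picture the kind of information such a document gathers, the sketch below models a minimal ORSD as a Python data structure. The field names and example entries are illustrative assumptions, not the template prescribed by the guidelines.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class OntologyRequirementsSpecification:
        """Minimal, illustrative ORSD structure (field names are assumptions)."""
        purpose: str                    # why the ontology is being built
        intended_uses: List[str]        # scenarios in which it will be used
        end_users: List[str]            # who will use it
        non_functional_reqs: List[str]  # e.g. naming conventions, language
        functional_reqs: List[str]      # often phrased as competency questions

    # Hypothetical example for a wine ontology
    orsd = OntologyRequirementsSpecification(
        purpose="Provide a shared vocabulary for describing wines",
        intended_uses=["Semantic annotation of a wine catalogue"],
        end_users=["Catalogue curators", "Recommendation service"],
        non_functional_reqs=["All labels in English"],
        functional_reqs=["Which grape varieties are used in a given wine?"],
    )
    print(orsd.purpose)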
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of the knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process that consists of steps such as the elicitation and formalization of requirements, and the development, testing, refactoring, and release of the ontology. Testing the ontology is a crucial and occasionally overlooked step of the process, due to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from the ontology engineers. A similar lack of tool support is noticeable in the requirements elicitation process. In this respect, the rise in the adoption and accessibility of knowledge graphs allows for the development and use of automated tools that assist with the elicitation of requirements from such a complementary source of data. Therefore, this doctoral research is focused on developing methods and tools that support the requirements elicitation and testing steps of an ontology engineering process. To support the testing of the ontology, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.
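The general idea of deriving competency questions from a knowledge graph can be illustrated with a naive verbalisation sketch. This is an illustrative assumption about one possible approach, not the RevOnt pipeline itself; the triples and templates are hypothetical.

    # Naive illustration (not the RevOnt pipeline): verbalise knowledge-graph
    # triples into candidate competency questions for an engineer to review.
    triples = [
        ("Rome", "is the capital of", "Italy"),    # hypothetical KG statements
        ("Italy", "has population", "59,000,000"),
    ]

    def candidate_questions(triples):
        questions = []
        for subject, predicate, obj in triples:
            # Template-based verbalisation; a real method would also cluster
            # and filter candidates before presenting them to the engineer.
            questions.append(f"Which entity {predicate} {obj}?")
            questions.append(f"What does {subject} relate to via '{predicate}'?")
        return questions

    for q in candidate_questions(triples):
        print(q)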
Abstract:
In this paper, a novel adaptive strategy to obtain technically justified fault-ride-through requirements for wind turbines (WTs) is proposed. The main objective is to promote an effective integration of wind turbines into power systems that still have low penetration levels of wind power, based on technical and economic considerations. The level of requirement imposed by the strategy is increased stepwise over time, depending on system characteristics and on the wind power penetration level. The underlying idea is to introduce stringent requirements only when they are technically needed for reliable and secure power system operation. Voltage stability support and fault-ride-through requirements are considered in the strategy. Simulations are based on the Chilean transmission network, a midsize isolated power system with still low penetration levels of wind power, and include fixed speed induction generators and doubly fed induction generators. The effects on power system stability of the wind power injections integrated into the network by adopting the adaptive strategy are compared with the effects of the same installed capacity of wind power when only WTs able to fulfill stringent requirements (fault-ride-through capability and voltage stability support) are considered. Based on the simulations and international experience, technically justified requirements for the Chilean case are proposed.
Abstract:
Dissertation submitted to obtain the degree of Doctor in Informatics Engineering
Abstract:
Software development is a complex process. One of its central factors is the set of requirements placed on the software. These requirements come in many kinds and at different levels, from desired functionality down to very detailed requirements. Managing these requirements is likewise highly multifaceted, even though the literature presents it as a clear-cut process consisting of a series of distinct phases. The focus of this work was on managing requirement changes and feedback on the finished software, and on how a requirements management tool could support these processes. Using a requirements management tool does not solve any problems by itself, but it provides a framework for improving requirements management. Among the benefits of using such a tool are: centralized storage of requirements, the definition of access rights governing which users may view or modify information, control of the change management process, analysis of the impact of changes, traceability, and access to the data through a web browser.
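Two of the benefits listed above, traceability and change impact analysis, can be pictured with a small sketch. The data model below is an illustrative assumption, not the schema of any particular requirements management tool.

    # Illustrative sketch of requirement traceability and change impact analysis.
    # The identifiers and link structure are hypothetical.
    trace_links = {
        "REQ-1": ["REQ-3", "REQ-4"],   # REQ-1 is refined by REQ-3 and REQ-4
        "REQ-3": ["TC-7"],             # REQ-3 is verified by test case TC-7
        "REQ-4": [],
    }

    def impacted_items(changed_req, links):
        """Follow trace links transitively to estimate the impact of a change."""
        impacted, stack = set(), [changed_req]
        while stack:
            current = stack.pop()
            for target in links.get(current, []):
                if target not in impacted:
                    impacted.add(target)
                    stack.append(target)
        return impacted

    print(impacted_items("REQ-1", trace_links))   # {'REQ-3', 'REQ-4', 'TC-7'}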
Abstract:
This bachelor’s thesis is part of a research project carried out in the summer of 2011 at Lappeenranta University of Technology. The goal of the project was to create an automation concept for controlling an electrically excited synchronous motor. This thesis concentrates on the setup and requirements specification part of the concept. The setup consists of an ABB AC500 as the PLC master device, a DCS800 as the exciter and an ACS800 as the frequency converter. The ACS800 frequency converter uses permanent magnet synchronous machine software to control the stator’s magnetic field, the DC drive handles the excitation, and the AC500 PLC master controls the communication and functionality of the system. The requirements specification briefly gives a general overview of the concept, the use and functionality of the PLC program, and the requirements needed for the whole concept and the PLC program to work as intended.
Abstract:
Indian marine engineers are renowned for employment globally due to their knowledge, skill and reliability. This praiseworthy status has been achieved mainly due to the systematic training imparted to marine engineering cadets. However, in an era of advancing technology, marine engineering training has to remain dynamic to imbibe the latest technology as well as to meet the demands of the shipping industry. New subjects of study have to be included in the curriculum in a timely manner, taking into consideration industry requirements and best practices in shipping. The technical competence of marine engineers also has to be subjected to changes depending upon the needs of the ever-growing and over-regulated shipping industry. Besides, certain soft skills have to be developed and improved amongst marine engineers in order to alter or amend the personality traits leading to their career success. If timely corrective action is taken, Indian marine engineers can be in still greater demand for employment in the global maritime field. In order to enhance the employability of our marine engineers by improving their quality, a study of marine engineers in general and class IV marine engineers in particular was conducted, based on three distinct surveys, viz., a survey among senior marine engineers, a survey among employers of marine engineers and a survey of class IV marine engineers themselves. The surveys were planned and questionnaires were designed to focus the study of the marine engineer officer class IV from the point of view of these three distinct groups of maritime personnel. As a result, the strengths and weaknesses of class IV marine engineers are identified with regard to their performance on board ships, acquisition of necessary technical skills, employability and career success. The essential qualities of a marine engineer are classified as academic, technical, social, psychological, physical, mental, emergency responsive, communicative and leadership, and have been assessed for a practicing marine engineer by statistical analysis of the data collected from the surveys. These are assessed for class IV marine engineers from the point of view of senior marine engineers and employers separately. The findings are delineated and graphically depicted in this thesis. Besides, six pertinent personality traits of a marine engineer, viz. self esteem, learning style, decision making, motivation, team work and listening self inventory, have been subjected to study, and their correlation with career success has been established wherever possible. This is carried out to develop a theoretical framework to understand what leads a marine engineer to his career attainment. This enables the author to estimate the personality strengths and weaknesses of a serving marine engineer and eventually to deduce possible corrective measures or modifications in marine engineering training in India. Maritime training is largely based on the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers 1995, its associated Code and the Merchant Shipping (STCW for Seafarers) Rules 1998. Further, the Maritime Education, Training and Assessment (META) Manual was subjected to critical scrutiny, and relevant findings of the surveys are superimposed on the existing rule requirements and curriculum. The views of senior marine engineers and executives of various shipping companies are taken into account before arriving at the revision of the syllabus of marine engineering courses.
Modifications in the pattern of workshop and sea service training for graduate mechanical engineering trainees are recommended. Desirable age brackets for junior engineers and chief engineers, the use of the Training and Assessment Record book (TAR Book) during training, etc. have also been evaluated. As a result of this pedagogic introspection of the existing system of marine engineering training in India, a revised pattern of workshop training of six months' duration for graduate mechanical engineers, a revised pattern of sea service training of one year's duration, and a modified flow diagram incorporating the above have been arrived at in this thesis. The effects of various personality traits on career success have been established, along with certain findings for the improvement of desirable personality traits of marine engineers.
Abstract:
This is a presentation for our year one INFO1008 course of Computational Systems. It covers the need for requirements capture and the difficulty of building a specification based on user information. We present UML Use Cases and Use Case diagrams as a way of capturing requirements from the users' point of view in a semi-structured way.
Abstract:
During the development of system requirements, software system specifications are often inconsistent. Inconsistencies may arise for different reasons, for example, when multiple conflicting viewpoints are embodied in the specification, or when the specification itself is at a transient stage of evolution. These inconsistencies cannot always be resolved immediately. As a result, we argue that a formal framework for the analysis of evolving specifications should be able to tolerate inconsistency by allowing reasoning in the presence of inconsistency without trivialisation, and circumvent inconsistency by enabling impact analyses of potential changes to be carried out. This paper shows how clustered belief revision can help in this process. Clustered belief revision allows for the grouping of requirements with similar functionality into clusters and the assignment of priorities between them. By analysing the result of a cluster, an engineer can either choose to rectify problems in the specification or to postpone the changes until more information becomes available.
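The intuition behind prioritised clusters can be illustrated with a highly simplified sketch. The cluster names, requirements and conflict detection below are hypothetical stand-ins; the actual work uses formal belief revision rather than this toy resolution loop.

    # Toy illustration of prioritised requirement clusters: when two clusters
    # conflict, the lower-priority cluster yields (1 = highest priority).
    clusters = [
        {"name": "safety",  "priority": 1, "requirements": {"door_locked_while_moving"}},
        {"name": "comfort", "priority": 3, "requirements": {"door_opens_on_request"}},
    ]
    conflicts = {("door_locked_while_moving", "door_opens_on_request")}

    def resolve(clusters, conflicts):
        ordered = sorted(clusters, key=lambda c: c["priority"])
        accepted = set()
        for cluster in ordered:
            for req in sorted(cluster["requirements"]):
                if any((req, a) in conflicts or (a, req) in conflicts for a in accepted):
                    print(f"postponed: {req} (conflicts with a higher-priority cluster)")
                else:
                    accepted.add(req)
        return accepted

    print(resolve(clusters, conflicts))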
Abstract:
In the context of Software Engineering, web accessibility is gaining ground, establishing itself as an important quality attribute. This is due to the initiatives of institutions such as the W3C (World Wide Web Consortium) and the introduction of norms and laws such as Section 508, which underline the importance of developing accessible Web sites and applications. Despite these improvements, the lack of web accessibility is still a persistent problem, and it may be related to the moment or phase in which this requirement is addressed within the development process, since Web accessibility is generally regarded as a programming problem or is treated only when the application has already been fully developed. Thus, considering accessibility during the analysis and requirements specification activities is a strategy to facilitate project progress, avoiding rework in advanced phases of software development caused by errors or omissions in the elicitation. The objective of this research is to develop a method and a tool to support the elicitation of web accessibility requirements. The elicitation strategy of this method is grounded in the Goal-Oriented NFR Framework and in the use of NFR catalogues created from the guidelines contained in WCAG 2.0 (Web Content Accessibility Guidelines), proposed by the W3C.
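As an illustration of what a WCAG-derived NFR catalogue entry might look like, consider the sketch below. The structure, softgoal name and operationalizations are illustrative assumptions, not the catalogue defined by the method described above.

    # Illustrative shape of a WCAG-derived NFR catalogue entry (hypothetical
    # structure and content).
    accessibility_catalogue = [
        {
            "softgoal": "Perceivable content",
            "wcag_guideline": "1.1 Text Alternatives",
            "operationalizations": [
                "Provide alt text for every informative image",
                "Mark purely decorative images so assistive technology skips them",
            ],
        },
    ]

    def requirements_for(softgoal, catalogue):
        """Return candidate accessibility requirements for a chosen softgoal."""
        return [op
                for entry in catalogue if entry["softgoal"] == softgoal
                for op in entry["operationalizations"]]

    print(requirements_for("Perceivable content", accessibility_catalogue))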
Abstract:
Aspects related to the users' cooperative work are not considered in the traditional approach of software engineering, since the user is viewed independently of his/her workplace environment or group, and the individual model is generalized to the study of the collective behavior of all users. This work proposes a software requirements process to address issues involving cooperative work in information systems that provide distributed coordination of the users' actions, where the communication among users occurs indirectly through the data entered while using the software. To achieve this goal, this research uses concepts from ergonomics, the 3C cooperation model, awareness and software engineering. Action-research is used as the research methodology, applied in three cycles during the development of a corporate workflow system in a technological research company. This article discusses the third cycle, which corresponds to the process of refining the cooperative work requirements with the software in actual use in the workplace, where the introduction of a computer system changes the users' workplace from face-to-face interaction to interaction mediated by the software. The results showed that a higher degree of users' awareness of their own activities and of other system users contributes to a decrease in their errors and in the inappropriate use of the system.
Abstract:
Percutaneous nephrolithotomy (PCNL) for the treatment of renal stones and other related renal diseases has proved its efficacy and has stood the test of time compared with open surgical methods and extracorporeal shock wave lithotripsy. However, access to the collecting system of the kidney is not easy, because the available intra-operative imaging modalities only provide a two-dimensional view of the surgical scenario. With this lack of visual information, several punctures are often necessary, which increases the risk of renal bleeding, splanchnic, vascular or pulmonary injury, or damage to the collecting system, which sometimes makes the continuation of the procedure impossible. In order to address this problem, this paper proposes a workflow for the introduction of a stereotactic needle guidance system for PCNL procedures. An analysis of the imposed clinical requirements and an instrument guidance approach to provide the physician with more intuitive planning and visual guidance to access the collecting system of the kidney are presented.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
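As a concrete example of the kind of module mentioned above, the snippet below POS-tags a short sentence with the NLTK toolkit. NLTK is merely one possible library choice for illustration, not one prescribed by this work, and the tagger output shown in the comment is only indicative.

    # One possible POS-tagging module, using the NLTK toolkit.
    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("The parser reads the annotated corpus.")
    print(nltk.pos_tag(tokens))
    # e.g. [('The', 'DT'), ('parser', 'NN'), ('reads', 'VBZ'), ...]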
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
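One simple way to picture the combination of annotations for a common level mentioned above is majority voting over the outputs of several taggers. The sketch below is a toy illustration under that assumption, not the integration architecture proposed in this work; the example taggers and tags are hypothetical.

    # Toy illustration of combining several annotation tools for the same level
    # by majority vote, so that individual tagging errors can cancel out.
    from collections import Counter

    def combine_by_vote(*tag_sequences):
        """Each argument is one tagger's output: a list of (token, tag) pairs."""
        combined = []
        for position in zip(*tag_sequences):
            token = position[0][0]
            tag, _ = Counter(tag for _, tag in position).most_common(1)[0]
            combined.append((token, tag))
        return combined

    tagger_a = [("book", "NN"), ("flights", "NNS")]
    tagger_b = [("book", "VB"), ("flights", "NNS")]
    tagger_c = [("book", "NN"), ("flights", "NNS")]
    print(combine_by_vote(tagger_a, tagger_b, tagger_c))
    # [('book', 'NN'), ('flights', 'NNS')]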
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should help solve also the problems and limitations of linguistic annotation tools aforementioned.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate annotation approaches, mechanisms and tools from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
In this paper we want to point out, by means of a case study, the importance of incorporating some knowledge engineering techniques into the processes of software engineering. Precisely, we are referring to knowledge eduction techniques. We know the difficulty of requirements acquisition and its importance in minimising the risks of a software project, both in the development phase and in the maintenance phase. To capture the functional requirements, use cases are generally used. However, as we will show in this paper, this technique is insufficient when the problem domain knowledge exists only in the experts' minds. In this situation, combining use cases with eduction techniques in every development phase will let us discover the correct requirements.