939 results for SOFTWARE QUALITY CLASSIFICATION
Abstract:
Our previous research on possible quality improvements in Extreme Programming (XP) led us to the conclusion that XP supports many good engineering practices but still leaves room for refinement. Our proposal was to add dedicated Quality Assurance (QA) measures that are sufficiently effective and at the same time simple enough in the context of XP. This paper analyzes effective ways of applying proven quality assurance practices to XP. These should not negatively affect the process and must at the same time lead to better quality assurance. We aim to make changes to XP that, even if they slow down development somewhat, will make it suitable for a wider range of projects, including large and very large projects as well as life-critical and highly reliable systems.
Abstract:
Software bug analysis is one of the most important activities in Software Quality. The rapid and correct implementation of the necessary repair affects both developers, who must deliver fully functioning software, and users, who need to perform their daily tasks. In this context, incorrect bug classification can lead to undesirable situations. One of the main attributes assigned to a bug at the time of its initial report is severity, which reflects the urgency of fixing the problem. In this scenario, using datasets extracted from five open source systems (Apache, Eclipse, Kernel, Mozilla and Open Office), we identified an irregular distribution of bugs across the existing severities, which is an early sign of misclassification. In the analyzed dataset, about 85% of bugs are classified with normal severity. This classification rate can negatively influence software development, since a misclassified bug may be allocated to a developer with little experience, so that its correction takes longer or even results in an incorrect implementation. Several studies in the literature have disregarded normal bugs, working only with the portion of bugs initially considered severe or non-severe. This work investigates this portion of the data in order to identify whether the normal severity reflects the real impact and urgency, whether there are bugs (initially classified as normal) that should have received a different severity, and whether there are impacts on developers in this context. To this end, an automatic classifier based on three algorithms (Naive Bayes, MaxEnt and Winnow) was developed to assess whether normal severity is correct for the bugs initially categorized with it. The algorithms achieved an accuracy of about 80% and showed that, depending on the algorithm, between 21% and 36% of the bugs should have been classified differently, which represents somewhere between 70,000 and 130,000 bugs in the dataset.
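As an illustration of the kind of automatic classifier described in this abstract, the following is a minimal sketch of a text-based severity classifier using a multinomial Naive Bayes model; it is not the authors' implementation, and the example reports and labels are hypothetical placeholders rather than data from the studied trackers.

    # Minimal sketch of a text-based bug severity classifier (Naive Bayes variant).
    # The reports and labels below are hypothetical placeholders; in the study the
    # input would be bug reports mined from the five open source trackers.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    reports = [
        "Crash on startup when the user profile is corrupted",      # severe
        "Typo in the tooltip text of the preferences dialog",       # non-severe
        "Data loss after an unexpected shutdown while saving",      # severe
        "Minor misalignment of toolbar icons at high resolution",   # non-severe
    ]
    labels = ["severe", "non_severe", "severe", "non_severe"]

    # TF-IDF features over the report text, followed by multinomial Naive Bayes.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(reports, labels)

    # A bug originally filed with "normal" severity is re-scored; the predicted
    # label suggests whether the default severity looks appropriate.
    print(model.predict(["Application freezes while saving large files"]))

In the same spirit, a report originally filed as normal would be re-scored to check whether another severity fits it better.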
Abstract:
The knowledge-intensive character of software production and its rising demand suggest the need for mechanisms to properly manage the knowledge involved, in order to meet deadline, cost and quality requirements. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. Specifically for software development, capitalization enables easier access to knowledge, minimizes its loss, reduces the learning curve, and helps avoid repeated errors and rework. This thesis therefore presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, so that it can be used in the execution of new tasks. The method was built from a set of methodological procedures: a literature review, a systematic review and an analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study conducted in a real case and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain despite these measures. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an early stage and in improving the security of software in the next versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce) and develop a classification framework for differentiating between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed on collected software security defects of Mozilla Firefox.
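To make the prediction side more concrete, here is a minimal sketch, under the assumption that files are described by classical code metrics, of ranking files by their predicted proneness to hard-to-reproduce vulnerabilities with a Random Forest; the metric columns and values are hypothetical and do not come from the Mozilla Firefox dataset used in the work.

    # Minimal sketch of ranking files by predicted vulnerability proneness with a
    # Random Forest trained on classical software metrics. The metric columns
    # (lines of code, cyclomatic complexity, past security fixes) and all values
    # are hypothetical placeholders, not the study's actual data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    X = np.array([
        [1200, 35, 4],
        [ 300,  8, 0],
        [2100, 60, 9],
        [ 450, 12, 1],
        [1800, 40, 6],
        [ 150,  5, 0],
    ])
    # 1 = a hard-to-reproduce vulnerability was later found in the file, 0 = none.
    y = np.array([1, 0, 1, 0, 1, 0])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)

    # Score new files so that security experts can focus on the top-ranked ones.
    new_files = np.array([[900, 25, 2], [2500, 70, 10]])
    print(clf.predict_proba(new_files)[:, 1])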
Abstract:
The activity of validating the identified requirements for an information system helps to improve the quality of a requirements specification document and, consequently, the success of a project. Although various support tools for requirements engineering exist on the market, there is still a lack of automated support for the validation activity. In this context, the purpose of this paper is to make up for that deficiency by using an automated tool that provides the resources for carrying out an adequate validation activity. The contribution of this study is to enable an agile and effective follow-up of the scope established for the requirements, so as to lead the development to a solution that satisfies the real needs of the users, as well as to supply project managers with relevant information about the maturity of the analysts involved in requirements specification.
Abstract:
This paper presents a catalog of smells in the context of interactive applications. These so-called usability smells are indicators of poor design in an application's user interface, with the potential to hinder not only its usability but also its maintenance and evolution. To eliminate such usability smells we discuss a set of program/usability refactorings. In order to validate the presented usability smells catalog, and the associated refactorings, we present a preliminary empirical study with software developers in the context of a real open source hospital management application. Moreover, a tool that computes graphical user interface behavior models, given the application's source code, is used to automatically detect usability smells at the model level.
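As a rough illustration of detecting a smell on a behavior model, the sketch below encodes a hypothetical model as a directed graph of windows and user actions and flags a task that needs too many actions to reach; both the smell and the threshold are illustrative and are not taken from the catalog presented in the paper.

    # Minimal sketch of checking one hypothetical usability smell on a GUI
    # behavior model. The model is a directed graph: window -> {action: target}.
    from collections import deque

    behaviour_model = {
        "Main": {"open_patients": "PatientList", "settings": "Settings"},
        "PatientList": {"select": "PatientDetail", "back": "Main"},
        "PatientDetail": {"edit": "EditForm", "back": "PatientList"},
        "EditForm": {"save": "PatientDetail", "advanced": "AdvancedOptions"},
        "AdvancedOptions": {"confirm": "EditForm"},
        "Settings": {"back": "Main"},
    }

    def shortest_action_path(model, start, goal):
        """Breadth-first search for the fewest user actions from start to goal."""
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            window, depth = queue.popleft()
            if window == goal:
                return depth
            for target in model.get(window, {}).values():
                if target not in seen:
                    seen.add(target)
                    queue.append((target, depth + 1))
        return None

    # Hypothetical smell: a routine task buried too deep in the navigation.
    MAX_ACTIONS = 3
    steps = shortest_action_path(behaviour_model, "Main", "AdvancedOptions")
    if steps is not None and steps > MAX_ACTIONS:
        print(f"Usability smell: {steps} actions needed to reach AdvancedOptions")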
Abstract:
Dissertation of a scientific nature submitted to obtain the Master's degree in Informatics and Computer Engineering
Abstract:
Work carried out under the supervision of Prof. António Brandão Moniz for the course "Factores Sociais da Inovação" (Social Factors of Innovation) of the Master's in Informatics Engineering, at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa
Abstract:
Dissertation presented to obtain a Master's degree in Computer Science
Abstract:
OBJECTIVE To estimate the prevalence of hepatitis C virus infection in Brazil's inmate population. METHODS Systematic review of hepatitis C virus infection in the inmate population. Brazilian studies published from January 1, 1989 to February 20, 2014 were evaluated. The methodological quality of the studies was assessed using a scale of 0 to 8 points. RESULTS Eleven eligible studies were analyzed and provided data on hepatitis C virus infection among 4,375 inmates from seven states of Brazil, with a mean quality classification of 7.4. The overall hepatitis C virus prevalence among Brazilian inmates was 13.6% (ranging from 1.0% to 41.0%, depending on the study). The odds of inmates being seropositive for hepatitis C virus in the states of Minas Gerais (MG), Sergipe (SE), Mato Grosso do Sul (MS), Rio Grande do Sul (RS), Goiás (GO) and Espírito Santo (ES) were, respectively, 84.0% (95%CI 0.06;0.45), 92.0% (95%CI 0.04;0.13), 88.0% (95%CI 0.09;0.18), 74.0% (95%CI 0.16;0.42), 84.0% (95%CI 0.08;0.31) and 89.0% (95%CI 0.01;0.05) lower than that observed in the state of São Paulo (seroprevalence of 29.3%). The four studies conducted in the city of São Paulo revealed a lower prevalence in more recent studies compared with older ones. CONCLUSIONS The highest prevalence of hepatitis C virus infection in Brazil's inmate population was found in São Paulo, which may reflect the urban diversity of the country. Although Brazilian studies have good methodological quality for evaluating the prevalence of the hepatitis C virus, they are scarce and lack data on risk factors associated with this infection, which could support decisions on prevention and the implementation of public health policies for Brazilian prisons.
Abstract:
Requirements Engineering has been acknowledged as an essential discipline for Software Quality. Poorly defined processes for eliciting, analyzing, specifying and validating requirements can lead to unclear issues or misunderstandings about business needs and the project's scope. These typically result in customer dissatisfaction with the product's quality or in increases to the project's budget and duration. Maturity models allow an organization to measure the quality of its processes and improve them according to an evolutionary path based on levels. The Capability Maturity Model Integration (CMMI) addresses the aforementioned Requirements Engineering issues. CMMI defines a set of best practices for process improvement that are divided into several process areas; Requirements Management and Requirements Development are the process areas concerned with Requirements Engineering maturity. Altran Portugal is a consulting company concerned with the quality of its software. In 2012, its Solution Center department successfully developed and applied a set of processes aligned with CMMI-DEV v1.3, which granted it a Level 2 maturity certification. For 2015, the company defined an organizational goal of reaching CMMI-DEV maturity level 3. This MSc dissertation is part of that organizational effort. In particular, it is concerned with the process areas that address the activities of Requirements Engineering. Our main goal is to contribute to the development of Altran's internal engineering processes so that they conform to the guidelines of the Requirements Development process area. In this dissertation, we started with an evaluation method based on CMMI and conducted a compliance assessment of Altran's current processes. This allowed us to demonstrate their alignment with the CMMI Requirements Management process area and to highlight the improvements needed to conform to the Requirements Development process area. Based on the study of alternative solutions for the gaps found, we proposed a new Requirements Management and Development process that was later validated using three different approaches. The main contribution of this dissertation is the new process developed for Altran Portugal. However, given that studies on these topics are not abundant in the literature, we also expect to contribute useful evidence to the existing body of knowledge with a survey on CMMI and requirements engineering trends. Most importantly, we hope that the implementation of the proposed process improvements will minimize the risks of mishandled requirements, increasing Altran's performance and taking the company one step closer to the desired maturity level.
Abstract:
Agile methods refer to a framework, both theoretical and practical, built from a variety of proven software engineering practices. Modern software engineering approaches, agile methods and usability engineering, are moving software development towards a more customer-driven way of working. To ensure software quality, the customer is closely involved already during development, which makes it easier to avoid unnecessary features and wrong design decisions. This thesis discusses ways in which a small or medium-sized enterprise (SME) could improve its practices and thereby gain a competitive advantage in application development. An SME is in a better position than larger companies in that it is inherently agile and quick to change course, but it lacks an established software development tradition and may therefore rely on immature solutions. Moving a company's software process towards more agile methods is not impossible, but it requires willingness and commitment to the change from both employees and stakeholders. If these prerequisites are missing, a transition to agile methods is not sensible; the company is better off keeping its current methods and improving them. The thesis also discusses usability engineering and how it can be put into practice with only minor changes to traditional working habits. Since the resources of an SME can be assumed to be insufficient for full-scale usability engineering, the thesis proposes lightweight approaches that can nevertheless considerably improve the user experience of the software.
Abstract:
Terrestrial Trunked Radio (TETRA) is a modern digital mobile radio standard, designed in particular to meet the demanding security and reliability requirements of public authorities. Software testing is an essential part of ensuring software quality. Testing is divided into several phases and covers the entire software life cycle, from initial development to the finished product delivered to the customer. Functional testing is carried out either by the software designers or by a separate test team using the Nokia TETRA system test laboratory. The purpose of testing is to verify that the software, its subprograms and its features fulfil the functional and quality requirements set for them. This Master's thesis gives an overview of the functional testing process in the Nokia TETRA system laboratory. Using an example test case, it provides an end-to-end picture of how the functional testing process is carried out from start to finish.
Abstract:
This Master's thesis investigates how Symbian application development could be made more efficient. The thesis introduces the Symbian operating system and discusses the challenges and constraints encountered in Symbian application development. Existing development approaches are also examined with respect to the goal of the thesis. In Symbian application development, the same things are done over and over again, and because Symbian is an open operating system there are many application developers, so a more efficient development approach would save considerable resources. At present, traditional programming practices appear to be the most popular way to develop applications, yet several solutions that aim to make development more efficient already exist, which demonstrates the need for such improvements. The system implemented in this thesis runs Symbian applications based on an XML definition. When an XML definition is used instead of C++ code, application development changes; these changes must nevertheless be positive and must not compromise the quality or usability of the software.