963 results for software quality attribute
Abstract:
The goal of this thesis was to determine whether the quality of the profitability reporting implemented by the thesis commissioner is, in the users' opinion, sufficient. The profitability reporting is implemented with data warehouse technology. The thesis also aimed to define what software quality means and how it can be evaluated. The study used a qualitative research method. The material for the quality evaluation was collected by interviewing seventeen active users of the profitability reporting. In this thesis, software quality means the software's ability to meet or exceed its users' reasonable wishes and expectations. Quality was evaluated using the six quality characteristics defined by the ISO/IEC 9126 standard, which describe software quality with minimal overlap. In addition, the evaluation made use of an informative annex, not part of the standard proper, that refines the quality characteristics presented in ISO/IEC 9126. As a result of the study, it can be stated that, according to the users, the profitability reporting is of sufficient quality, since it provides easy-to-use, correctly formatted reports with adequate response times for the users' needs. The effective utilization suggests that the construction of the data warehouse was successful. The study also yielded numerous development and improvement ideas, which will serve as one aid for future development work.
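As a quick reference (not part of the abstract itself), the six top-level characteristics that ISO/IEC 9126-1 defines, and which the thesis used as its evaluation framework, can be listed as follows; the one-line glosses are paraphrases rather than the standard's wording.

```python
# The six quality characteristics of ISO/IEC 9126-1, used by the thesis
# as its evaluation framework. Glosses are paraphrases, not the
# standard's exact definitions.
ISO_IEC_9126_CHARACTERISTICS = {
    "functionality":   "the functions meet stated and implied needs",
    "reliability":     "maintains its level of performance over time",
    "usability":       "effort needed to understand, learn and operate",
    "efficiency":      "performance relative to the resources used",
    "maintainability": "effort needed to make specified modifications",
    "portability":     "ability to be transferred between environments",
}

for name, gloss in ISO_IEC_9126_CHARACTERISTICS.items():
    print(f"{name:<16} {gloss}")
```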
Abstract:
This work was done as part of the MASTO research project, whose purpose is to develop an adaptive reference model for software testing. The work was carried out as a statistical study using the survey method. The study interviewed 31 organizational units from around Finland that build medium-criticality applications. The study's hypotheses concerned the dependence of quality on the software development method, customer participation, standard compliance, the customer relationship, business orientation, criticality, trust, and the level of testing. Correlation and regression analyses were performed to look for correlations between the hypothesized factors and quality. In addition, the study surveyed which software development practices, methods, and tools the organizational units used; problems and improvement suggestions related to software testing; the most significant ways for the customer to influence software quality; and the biggest benefits and drawbacks of outsourcing software development or testing. The study found that quality correlates positively and statistically significantly with the level of testing, standard compliance, customer participation in the design phase and in project steering, trust, and one sub-question related to the customer relationship. Based on the regression analysis, a regression equation was formed in which quality was found to depend positively on standard compliance, customer participation in the design phase, and trust.
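The abstract does not include the survey data or analysis scripts, but the kind of correlation-and-regression analysis it describes can be sketched as follows; all factor names and data below are hypothetical placeholders, not the thesis's questionnaire items.

```python
# Minimal sketch of the correlation/regression analysis described above.
# Factor names and data are hypothetical; the thesis's actual
# questionnaire coding is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n = 31  # one row per surveyed organizational unit

# Hypothetical 1-5 survey scores for three explanatory factors.
standard_compliance = rng.integers(1, 6, n).astype(float)
customer_in_design  = rng.integers(1, 6, n).astype(float)
trust               = rng.integers(1, 6, n).astype(float)
# Synthetic quality score loosely driven by the factors plus noise.
quality = (0.5 * standard_compliance + 0.3 * customer_in_design
           + 0.4 * trust + rng.normal(0, 0.5, n))

# Pearson correlation of each factor with quality.
for name, x in [("standard compliance", standard_compliance),
                ("customer in design", customer_in_design),
                ("trust", trust)]:
    r = np.corrcoef(x, quality)[0, 1]
    print(f"r({name}, quality) = {r:+.2f}")

# Ordinary least squares: quality ~ b0 + b1*x1 + b2*x2 + b3*x3.
X = np.column_stack([np.ones(n), standard_compliance,
                     customer_in_design, trust])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
print("regression coefficients (b0..b3):", np.round(coef, 2))
```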
Abstract:
Intermolecular forces are a useful concept that can explain the attraction between particles as well as numerous phenomena in our lives, such as viscosity, solubility, drug interactions, and the dyeing of fibers. However, studies show that students have difficulty understanding this important concept, which led us to develop free educational software in English and Portuguese. The software can be used interactively by teachers and students, thus facilitating better understanding. Professors and students, both graduate and undergraduate, were surveyed on a Likert scale about the software's quality, intuitiveness of use, ease of navigation, and pedagogical applicability. The results led to the conclusion that the developed application can serve as an auxiliary tool to assist teachers in their lectures and students in learning about intermolecular forces.
Abstract:
Test management is an essential part of the software quality assurance process, and it needs a tool to succeed. Offering test management software as a SaaS service creates an opportunity to provide this tool easily and cost-effectively, with predefined processes. This work studies the possibilities and limitations of offering test management software as a SaaS service from the perspective of a small project. The SaaS service model is studied as a whole formed by its parts, and the model's suitability for producing a test management service is examined. In addition, a case study based on a questionnaire and an interview examines users' experiences of using test management software delivered as a SaaS service. The results of the study indicate that the SaaS model does not impose particular limitations on providing test management software for a quality assurance project, at least at the scale studied. Special attention must be paid to designing the service's operating models and dividing responsibilities, so that the service can be delivered with the end users' needs taken into account as fully as possible.
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency, and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that let us prove its correctness, together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which the different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we show that this step, too, can be performed in a way that is mathematically proven correct. Most of the modelling and proving in this thesis is tool-based. This demonstrates both the advances in formal methods and their increased reliability, and thus advocates for their more widespread usage in the future.
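As a rough illustration of the streaming adaptation mentioned above (and not the thesis's actual Event-B model), a piece selector can prioritize pieces inside a playback window and fall back to rarest-first outside it; all names in this sketch are hypothetical.

```python
# Illustrative sketch: adapting BitTorrent's rarest-first piece
# selection for on-demand streaming by prioritizing pieces inside a
# playback window. Not the thesis's Event-B model; names are hypothetical.
def select_piece(missing, availability, playhead, window=8):
    """Pick the next piece to request.

    missing      -- set of piece indices we do not have yet
    availability -- dict piece -> number of peers holding it
    playhead     -- index of the piece the player needs next
    window       -- pieces ahead of the playhead fetched in order
    """
    # Inside the window, deadlines dominate: take the earliest piece.
    urgent = [p for p in missing if playhead <= p < playhead + window]
    if urgent:
        return min(urgent)
    # Outside the window, fall back to classic rarest-first to keep
    # piece diversity in the swarm high.
    rest = [p for p in missing if p >= playhead + window]
    if rest:
        return min(rest, key=lambda p: availability.get(p, 0))
    return None

# Example: piece 3 is inside the window, so it wins over rarer pieces.
print(select_piece({3, 20, 40}, {3: 9, 20: 1, 40: 2}, playhead=2))
```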
Abstract:
A principal objective of software engineering is to be able to produce complex, large, reliable software in a reasonable time. Object-oriented (OO) technology has provided good concepts and modelling and programming techniques that have made it possible to develop complex applications in both academia and industry. This experience has nevertheless revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. Crosscutting concerns manifest themselves as the scattering of the same code across several modules of the system, or the tangling of several pieces of code within a single module. This new way of programming makes it possible to implement each concern independently of the others and then to assemble them according to well-defined rules. AO programming thus promises better productivity, better code reuse, and better adaptation of code to change. This approach quickly spread across the entire software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents numerous challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Consequently, modular reasoning about these programs is required; otherwise they would have to be re-examined in full each time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since the applied aspects often change the behaviour of their base components [47]. The same difficulties arise in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are weakly covered and constitute a very interesting research field. Likewise, interaction between aspects is a serious problem in the aspect community. To address these problems, we chose to use category theory and algebraic specification techniques, building on the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of our thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we define a logic, LA, which is used in the body of specifications to describe the behaviour of their components. The third contribution is the definition of the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
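Python has no aspect weaver, but a decorator gives a minimal analogy of the separation the abstract describes: a logging concern written once and applied to base operations, instead of being scattered through them. This is an illustrative sketch, not the thesis's categorical formalism.

```python
# Minimal analogy of aspect weaving (not the thesis's formalism): a
# crosscutting logging concern written once as a decorator and "woven"
# onto base operations, instead of scattering log statements everywhere.
import functools

def logged(func):
    """Advice applied around each join point (here: a function call)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"enter {func.__name__}{args}")
        result = func(*args, **kwargs)
        print(f"exit  {func.__name__} -> {result!r}")
        return result
    return wrapper

@logged  # weaving point: the base code stays free of the logging concern
def deposit(balance, amount):
    return balance + amount

@logged
def withdraw(balance, amount):
    return balance - amount

deposit(100, 50)
withdraw(100, 30)
```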
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-detect software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed that assists the compiler in eliminating redundant bank-switching code and deciding on an optimum allocation of data to banked memory, resulting in a minimum number of bank-switching instructions in the embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state space creation and thus contributes to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
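A much-simplified sketch of the redundant bank-switch detection idea follows, on a single straight-line path rather than the full control flow graph the dissertation analyzes; the tuple-based instruction encoding is a hypothetical stand-in, not real PIC16F87X machine code.

```python
# Simplified sketch of redundant bank-switch detection on a straight-line
# instruction sequence. The dissertation tracks bank states along all CFG
# paths; here we track only one path, and the instruction encoding is a
# hypothetical tuple form, not real PIC16F87X machine code.
def find_redundant_bank_switches(instructions):
    """Return indices of bank-select instructions that do not change
    the currently active bank and are therefore redundant."""
    active_bank = 0          # bank selected at reset
    redundant = []
    for i, (op, arg) in enumerate(instructions):
        if op == "SELECT_BANK":
            if arg == active_bank:
                redundant.append(i)   # re-selecting the active bank
            active_bank = arg
    return redundant

program = [
    ("SELECT_BANK", 1),
    ("MOVWF", "TRISB"),
    ("SELECT_BANK", 1),      # redundant: bank 1 is already active
    ("MOVWF", "TRISA"),
    ("SELECT_BANK", 0),
    ("MOVF", "PORTB"),
]
print(find_redundant_bank_switches(program))  # -> [2]
```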
Abstract:
In this paper we describe an exploratory assessment of the effect of aspect-oriented programming on software maintainability. An experiment was conducted in which 11 software professionals were asked to carry out maintenance tasks on one of two programs. The first program was written in Java and the second in AspectJ. Both programs implement a shopping system according to the same set of requirements. A number of statistical hypotheses were tested. The results seemed to suggest a slight advantage for the subjects using the object-oriented system, since in general it took them less time to answer the questions about that system. Also, both systems appeared to be equally difficult to modify. However, the results did not show a statistically significant influence of aspect-oriented programming at the 5% level. We are aware that the results of this single small study cannot be generalized. We conclude that more empirical research is necessary in this area to identify the benefits of aspect-oriented programming, and we hope that this paper will encourage such research.
Abstract:
Single-page applications have historically been subject to strong market forces driving fast development and deployment at the expense of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development: AngularJS is a general framework providing rich base functionality, while React is a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability between the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements, but less so when applications and systems grow in size.
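For reference, a common variant of the Maintainability Index (the Oman and Hagemeister three-metric formula) is sketched below; the report does not state exactly which MI variant or tooling it used, so the constants here should be read as an assumption.

```python
# A common variant of the Maintainability Index (Oman & Hagemeister),
# shown for illustration; the report does not spell out which MI variant
# or tooling it used, so treat the constants as an assumption.
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic three-metric MI; higher values mean easier maintenance."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Example: a module growing in size and complexity sees its MI drop.
print(round(maintainability_index(1200, 10, 300), 1))   # ~39.4
print(round(maintainability_index(2400, 25, 600), 1))   # ~21.2
```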
Abstract:
Over the years, the use of application frameworks designed for the View and Controller layers of the MVC architectural pattern, adapted to web applications, has become very popular. These frameworks are classified as action-oriented or component-oriented, according to the solution strategy adopted by the tools. The choice of strategy leads the system architecture design to acquire non-functional characteristics caused by the way the framework influences the developer to implement the system. Component reusability is one of those characteristics and plays a very important role in development activities such as system evolution and maintenance. This dissertation analyzes how reusability can be influenced by web framework usage. To accomplish this, small academic management applications were developed using the latest versions of the Apache Struts and JavaServer Faces frameworks, the main representatives of web frameworks for the Java platform. For this assessment, a software quality model was used that associates internal attributes, which can be measured objectively, with the characteristics in question. The attributes and metrics defined for the model were based on related work discussed in the document.
Abstract:
Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification, and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the process of verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an industrial example of a B specification. Based on this case study, we obtained input to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behaviour in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
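The abstract does not reproduce the generation rules, but the equivalence-partitioning and boundary-value idea behind the method can be sketched for a precondition of the form lo <= x <= hi; this is illustrative, not the thesis's B-Method tooling.

```python
# Illustrative sketch of the partitioning/boundary-value idea behind the
# method (not the thesis's B-Method tooling): from a precondition of the
# form lo <= x <= hi, derive positive tests inside the valid partition
# and negative tests just outside it.
def boundary_value_tests(lo, hi):
    """Return (positive, negative) test inputs for lo <= x <= hi."""
    positive = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]  # valid class
    negative = [lo - 1, hi + 1]                          # invalid classes
    return positive, negative

# Example: an operation whose precondition is 0 <= amount <= 100.
pos, neg = boundary_value_tests(0, 100)
print("positive:", pos)   # inputs expected to satisfy the precondition
print("negative:", neg)   # inputs expected to violate it
```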
Abstract:
The activity of validating identified requirements for an information system helps to improve the quality of a requirements specification document and, consequently, the success of a project. Although various support tools for requirements engineering exist in the market, there is still a lack of automated support for the validation activity. In this context, the purpose of this paper is to make up for that deficiency with an automated tool that provides the resources for executing an adequate validation activity. The contribution of this study is to enable agile and effective follow-up of the scope established for the requirements, so as to lead the development toward a solution that satisfies the real needs of the users, as well as to supply project managers with relevant information about the maturity of the analysts involved in requirements specification.
Abstract:
This work describes a new web system to aid project management, created to correct the principal deficiencies identified in currently available systems with the same purpose, as well as to follow the guidelines proposed in the Project Management Body of Knowledge (PMBoK) and the quality characteristics described in the ISO/IEC 9126 standard. Following the adopted methodology, the system was structured to meet the real needs of project managers and to contribute to obtaining quality results from projects. The proposed solution was validated with the collaboration of professionals who used its functions over a period of 15 days. The results attested to the quality and adequacy of the developed system.