896 results for software quality metrics


Relevance: 80.00%

Abstract:

A principal objective of software engineering is to produce large, complex, and reliable software within a reasonable time. Object-oriented (OO) technology has provided sound concepts as well as modeling and programming techniques that have made it possible to develop complex applications in both academia and industry. This experience has, however, revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. Crosscutting concerns manifest as the same code scattered across several modules of a system, or as several pieces of code tangled within a single module. This new way of programming makes it possible to implement each concern independently of the others and then assemble them according to well-defined rules. AO programming therefore promises better productivity, better code reuse, and better adaptability of code to change. This approach quickly spread across the entire software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents many challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Modular reasoning about these programs is therefore required; otherwise they would have to be re-examined in full every time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since the applied aspects often change the behavior of their base components [47]. The same difficulties arise in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are weakly covered and constitute a very interesting field of research. Likewise, interactions between aspects are a serious problem in the aspect community. To address these problems, we chose to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we built on the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of this thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we defined a logic, LA, which is used in the body of specifications to describe the behavior of these components. The third contribution is the definition of the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
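
The thesis treats aspects and weaving formally, via category theory and algebraic specifications; purely as an informal illustration of the crosscutting-concern problem the abstract describes, here is a minimal Python sketch in which a logging concern is woven around a base operation without touching its body. The names (log_calls, Account, deposit) are hypothetical and not drawn from the thesis.

```python
# Minimal illustration of a crosscutting concern (logging) woven
# around a base operation via a Python decorator. All names here
# are hypothetical examples, not taken from the thesis.
import functools

def log_calls(func):
    """Aspect: logging advice applied around a base operation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"entering {func.__name__}")
        result = func(*args, **kwargs)
        print(f"leaving {func.__name__}")
        return result
    return wrapper

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    @log_calls  # weaving point: advice attached to the base method
    def deposit(self, amount):
        self.balance += amount
        return self.balance

Account(100).deposit(50)  # logging happens without editing deposit's body
```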

Relevance: 80.00%

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-detect software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thus improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. Incorrect machine-code patterns are identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum allocation of data to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory-bank state transitions corresponding to each bank-selection instruction, are used to detect redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
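
As an informal illustration of the redundant bank-switching detection the abstract describes, the following Python sketch tracks the active memory bank along a straight-line instruction sequence and flags bank selections that re-select the already-active bank. The dissertation's analysis works over all control-flow-graph paths of real PIC16F87X machine code; the BANKSEL mnemonic and instruction strings here are hypothetical simplifications.

```python
# Simplified sketch of redundant bank-switch detection, assuming a
# straight-line instruction sequence (the dissertation analyzes all
# CFG paths). The 'BANKSEL' mnemonic and instruction format are
# hypothetical simplifications of PIC bank-selection code.
def find_redundant_bank_switches(instructions):
    """Return indices of bank-select instructions that re-select
    the bank that is already active."""
    active_bank = None          # bank state unknown at entry
    redundant = []
    for i, instr in enumerate(instructions):
        if instr.startswith("BANKSEL"):
            bank = instr.split()[1]
            if bank == active_bank:   # transition to the same state
                redundant.append(i)
            active_bank = bank
        # non-banking instructions leave the active bank unchanged
    return redundant

program = ["BANKSEL 1", "MOVWF 0x20", "BANKSEL 1", "MOVWF 0x21", "BANKSEL 0"]
print(find_redundant_bank_switches(program))  # -> [2]
```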

Relevance: 80.00%

Abstract:

In this paper we describe an exploratory assessment of the effect of aspect-oriented programming on software maintainability. An experiment was conducted in which 11 software professionals were asked to carry out maintenance tasks on one of two programs. The first program was written in Java and the second in AspectJ. Both programs implement a shopping system according to the same set of requirements. A number of statistical hypotheses were tested. The results did seem to suggest a slight advantage for the subjects using the object-oriented system since in general it took the subjects less time to answer the questions on this system. Also, both systems appeared to be equally difficult to modify. However, the results did not show a statistically significant influence of aspect-oriented programming at the 5% level. We are aware that the results of this single small study cannot be generalized. We conclude that more empirical research is necessary in this area to identify the benefits of aspect-oriented programming and we hope that this paper will encourage such research.
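
The abstract does not name the specific statistical tests used; purely as an illustration of testing such a hypothesis at the 5% level with small groups, the sketch below compares two sets of hypothetical task times with a Mann-Whitney U test.

```python
# Illustrative sketch only: comparing maintenance-task times between
# two small groups at the 5% level. The data below is invented, and
# the paper's actual choice of test is not stated in this abstract.
from scipy.stats import mannwhitneyu

java_times = [32, 41, 28, 35, 39, 30]   # hypothetical minutes per task
aspectj_times = [36, 44, 40, 38, 47]    # hypothetical minutes per task

stat, p = mannwhitneyu(java_times, aspectj_times, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
print("significant at 5%" if p < 0.05 else "not significant at 5%")
```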

Relevance: 80.00%

Abstract:

Single-page applications have historically been subject to strong market forces driving fast development and deployment at the expense of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, AngularJS being a general framework providing rich base functionality and React a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability in the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size.
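
The report does not state which variant of the Maintainability Index its tooling computed; a widely used variant (the classic three-metric formula with a 0-100 normalization) is sketched below as an assumption, with hypothetical module measurements.

```python
# Sketch of a classic Maintainability Index formula; the report does
# not say which MI variant its tooling used, so treat this as an
# assumption. Higher values mean more maintainable code.
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(loc))
    return max(0.0, mi * 100 / 171)   # common 0-100 normalization

# Hypothetical measurements for one module:
print(round(maintainability_index(halstead_volume=1200,
                                  cyclomatic_complexity=14,
                                  loc=310), 1))
```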

Relevance: 80.00%

Abstract:

Background: The journal impact factor (IF) has become widely used as an absolute measure of the quality of professional journals. It is also increasingly used as a tool for measuring the academic performance of researchers and for informing decisions concerning the appointment and tenure of academic staff, as well as the viability of their departments/schools. In keeping with these IF-related trends, nurse researchers and faculty the world over are increasingly expected to publish only in journals that have a high IF and to abandon all other forms of publishing (including books and book chapters) that do not attract IF rankings.

Issues: The IF obsession is placing in jeopardy the sustainability and hence viability of nursing journals and academic nursing publication lists (academic texts). If nurse authors abandon their publishing agenda and publish only in 'elite' journals (many of which may be outside nursing), the capacity of the nursing profession to develop and control the cutting edge of its disciplinary knowledge could be placed at risk.

Actions: Other means of assessing the quality and impact of nursing journals need to be devised. In addition, other works (such as books and book chapters) also need to be included in quality metrics. Nurse authors and journal editors must work together to devise ways to ensure the sustainability and viability of nursing publications.
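
For readers unfamiliar with how the IF is computed, the standard two-year definition is sketched below with invented numbers; the abstract itself does not discuss the computation.

```python
# Sketch of the standard two-year journal impact factor; the journal
# and its numbers are invented for illustration.
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """IF for year Y = citations received in Y by items published in
    Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 450 citations in 2024 to its 2022-2023
# papers, of which there were 180.
print(round(impact_factor(450, 180), 2))  # -> 2.5
```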

Relevance: 80.00%

Abstract:

Multi-frame super-resolution algorithms aim to increase spatial resolution by fusing information from several low-resolution perspectives of a scene. While a wide array of super-resolution algorithms now exist, the comparative capability of these techniques in practical scenarios has not been adequately explored. In addition, a standard quantitative method for assessing the relative merit of super-resolution algorithms is required. This paper presents a comprehensive practical comparison of existing super-resolution techniques using a shared platform and 4 common greyscale reference images. In total, 13 different super-resolution algorithms are evaluated, and as accurate alignment is critical to the super-resolution process, 6 registration algorithms are also included in the analysis. Pixel-based visual information fidelity (VIFP) is selected from the 12 image quality metrics reviewed as the measure most suited to the appraisal of super-resolved images. Experimental results show that Bayesian super-resolution methods utilizing the simultaneous autoregressive (SAR) prior produce the highest quality images when combined with generalized stochastic Lucas-Kanade optical flow registration.
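
As a minimal illustration of multi-frame fusion (far simpler than the Bayesian SAR-prior methods the paper finds best), the following sketch upsamples registered low-resolution frames and averages them, assuming the per-frame offsets have already been estimated by a registration algorithm.

```python
# Minimal shift-and-add fusion sketch for multi-frame super-resolution;
# the methods compared in the paper (e.g. Bayesian SR with a SAR prior
# and Lucas-Kanade registration) are far more sophisticated.
import numpy as np
from scipy.ndimage import shift, zoom

def shift_and_add(frames, offsets, factor=2):
    """Upsample each registered low-res frame and average them.
    frames: list of 2-D arrays; offsets: per-frame (dy, dx) assumed
    to come from a registration algorithm."""
    acc = None
    for frame, (dy, dx) in zip(frames, offsets):
        hi = zoom(frame, factor, order=3)            # bicubic upsample
        hi = shift(hi, (dy * factor, dx * factor))   # align to reference
        acc = hi if acc is None else acc + hi
    return acc / len(frames)

rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(4)]
offsets = [(0.0, 0.0), (0.25, 0.0), (0.0, 0.25), (0.25, 0.25)]
print(shift_and_add(frames, offsets).shape)  # -> (64, 64)
```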

Relevance: 80.00%

Abstract:

Background: Continuous content management of health information portals is vital for their sustainability and widespread acceptance. The knowledge and experience of a domain expert is essential for content management in the health domain. Online health resources are generated at an exponential rate, so manually examining them for relevance to a specific topic and audience is a formidable challenge for domain experts. Intelligent content discovery for effective content management is a little-researched topic. An existing expert-endorsed content repository can provide the necessary leverage to automatically identify relevant resources and evaluate qualitative metrics.

Objective: This paper reports on design research towards an intelligent technique for automated content discovery and ranking for health information portals. The proposed technique aims to improve the efficiency of the current, mostly manual, process of portal content management by utilising an existing expert-endorsed content repository as a supporting base and a benchmark to evaluate the suitability of new content.

Methods: A model for content management was established based on a field study of potential users. The proposed technique is integral to this content management model and executes in several phases (i.e., query construction, content search, text analytics, and fuzzy multi-criteria ranking). The construction of multi-dimensional search queries with input from WordNet, the use of multi-word and single-word terms as representative semantics for text analytics, and the use of fuzzy multi-criteria ranking for subjective evaluation of quality metrics are original contributions reported in this paper.

Results: The feasibility of the proposed technique was examined with experiments conducted on an actual health information portal, the BCKOnline portal. Both intermediary and final results generated by the technique are presented in the paper; these help to establish the benefits of the technique and its contribution towards effective content management.

Conclusions: The prevalence of large numbers of online health resources is a key obstacle for domain experts involved in content management of health information portals and websites. The proposed technique has proven successful at searching for and identifying resources and measuring their relevance. It can be used to support the domain expert in content management and thereby ensure the health portal is up to date and current.
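
As an informal illustration of the fuzzy multi-criteria ranking phase, the sketch below aggregates per-criterion fuzzy scores with a weighted sum; the criterion names, weights, and aggregation rule are hypothetical and may differ from those used in the paper.

```python
# Simplified sketch of fuzzy multi-criteria ranking: each candidate
# resource gets fuzzy scores in [0, 1] per quality criterion, and a
# weighted aggregate orders the candidates. Criteria and weights
# here are hypothetical, not taken from the paper.
def rank_resources(resources, weights):
    """resources: {name: {criterion: fuzzy score in [0, 1]}}."""
    def aggregate(scores):
        return sum(weights[c] * scores[c] for c in weights)
    return sorted(resources, key=lambda r: aggregate(resources[r]),
                  reverse=True)

weights = {"relevance": 0.5, "credibility": 0.3, "readability": 0.2}
resources = {
    "resource_a": {"relevance": 0.9, "credibility": 0.7, "readability": 0.6},
    "resource_b": {"relevance": 0.6, "credibility": 0.9, "readability": 0.8},
}
print(rank_resources(resources, weights))  # -> ['resource_a', 'resource_b']
```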

Relevance: 80.00%

Abstract:

This thesis aims to contribute critically to the understanding of the intricate relations between the State, capital, and academic production. To this end, it set out to interpret the academic relations of production in graduate programs in Administration in Brazil, articulating them with broader analytical categories outlined so as to provide a political-economy backdrop. The assumption that the current intensification of the pace of academic production contrasts with an (idealized) past of contemplative science had to be confronted with the historical development of higher education and graduate studies in the country, in order to dispel certain mystifications in the debate. The State, in its version reformed along Friedmanite lines, the concept of monopoly capital, and labor process theory provided theoretical-methodological, and empirical, support for interpreting the proposed political-economic picture. To move from the general to the particular, from the connections between the State and capital to academic production, data were collected on two fronts: (i) the academic production of all 168 researchers holding CNPq Research Productivity (PQ) grants in Administration (as of March 2014) was analyzed, and (ii) in-depth interviews were conducted with researchers and doctoral students from a wide range of graduate programs in Administration across the country. The results were unsettling: a process is under way in which the labor of graduate advisees is being ever more intensively incorporated into the pedagogical-productive structures of graduate programs. Advisees account for the most substantial share of total academic production, while labor processes deepen the re-signification of the roles of graduate programs and intensify the division of labor, with diverse impacts on the relations among those involved in graduate education. When the analytical movement is reversed, from the relations within graduate programs in Administration in Brazil back to the political-economy picture in the background, it becomes apparent that the State (mainly through CAPES and CNPq) and the academic capitalist market (tending toward a few monopoly-capital firms) add fundamental determinations to the academic relations of production. Evaluation indices based on research accounting metrics legitimize themselves as epistemological monopolies of quality and are institutionalized through the coordinated actions of CAPES and CNPq throughout the official graduate system. Production targets are established and re-signified by the actors. Ultimately, even the type of science produced in the field is thereby defined. The conclusion is that resisting the current intensified standards of academic production requires a critical understanding of all these relations, which are woven and structured within graduate education.

Relevance: 80.00%

Abstract:

Formal methods and software testing are tools to obtain and control software quality. Used together, they provide mechanisms for software specification, verification, and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the process of verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage-criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry. Based on this case study, we obtained the input needed to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
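
As an illustration of the partitioning and boundary-value techniques the method builds on, the sketch below derives positive and negative test values from a hypothetical precondition interval 1 <= x <= 100; it is not drawn from a real B machine.

```python
# Illustrative sketch of equivalence-class partitioning and boundary
# value analysis over a precondition such as "1 <= x <= 100". The
# interval is hypothetical, not taken from an actual B specification.
def test_values(lower, upper):
    """Positive cases: the boundaries, their neighbors, and an
    interior point. Negative cases: just outside each boundary."""
    positive = [lower, lower + 1, (lower + upper) // 2, upper - 1, upper]
    negative = [lower - 1, upper + 1]
    return positive, negative

pos, neg = test_values(1, 100)
print("positive cases:", pos)   # -> [1, 2, 50, 99, 100]
print("negative cases:", neg)   # -> [0, 101]
```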

Relevance: 80.00%

Abstract:

The activity of validating identified requirements for an information system helps to improve the quality of a requirements specification document and, consequently, the success of a project. Although various support tools for requirements engineering exist in the market, there is still a lack of automated support for the validation activity. In this context, the purpose of this paper is to make up for that deficiency by using an automated tool to provide the resources needed to carry out an adequate validation activity. The contribution of this study is to enable an agile and effective follow-up of the scope established for the requirements, so as to lead the development to a solution that satisfies the real needs of the users, as well as to supply project managers with relevant information about the maturity of the analysts involved in requirements specification.

Relevance: 80.00%

Abstract:

This work describes a new web system to aid project management, created to correct the principal deficiencies identified in currently available systems with a similar purpose, as well as to follow the guidelines proposed in the Project Management Body of Knowledge (PMBoK) and the quality characteristics described in the ISO/IEC 9126 standard. Following the adopted methodology, the system was structured to meet the real needs of project managers and also to contribute towards obtaining quality results from the projects. The proposed solution was validated with the collaboration of professionals who used its functions for a period of 15 days. The results attested to the quality and adequacy of the developed system.

Relevance: 80.00%

Abstract:

This work presents a new approach to the automatic assessment of SQL queries. The approach addresses the challenge of encouraging learners to improve their solutions: beyond an answer that returns the correct result, it seeks a query whose complexity is close to that of the optimal solution. The proposal can be used in distance-learning environments or in classroom laboratory activities, including assessments. The proposed solution has the following advantages: (1) the learner receives instant feedback during the practical programming activity, which allows them to refactor their solution toward an optimal one; (2) full integration of the teaching of programming concepts with examples of program fragments executable online; (3) monitoring of the learner's activities (how many examples were executed; how many execution attempts were made for each exercise, etc.). This work is a first step toward building a fully assisted environment (for example, with automatic assessment) for teaching the SQL programming language, in which the instructor is freed from the arduous work of grading SQL statements and can carry out more relevant pedagogical tasks. The method, grounded in statistics and software engineering metrics, can be adapted to other languages such as Java and Pascal. In addition, LabSQL serves as a laboratory for experimenting with two new techniques, one for assessment and one for follow-up, which are being investigated in parallel work: (a) automatic assessment of open-ended conceptual questions, in addition to the traditional objective questions; (b) a follow-up method based on assembling an assessment rubric.
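
As an informal illustration of grading by correctness plus closeness to an optimal solution's complexity, the sketch below uses a crude keyword count as a hypothetical stand-in for the statistical and software engineering metrics the work actually employs.

```python
# Illustrative sketch of grading a SQL answer by correctness plus
# closeness to a reference solution's complexity. The complexity
# measure (a keyword count) is a hypothetical stand-in for the
# metrics the work actually uses.
import re

KEYWORDS = ("select", "from", "where", "join", "group", "having", "union")

def complexity(sql):
    tokens = re.findall(r"[a-z_]+", sql.lower())
    return sum(tokens.count(k) for k in KEYWORDS)

def grade(student_sql, reference_sql, student_rows, reference_rows):
    if student_rows != reference_rows:        # wrong result set
        return 0.0
    gap = abs(complexity(student_sql) - complexity(reference_sql))
    return max(0.0, 1.0 - 0.1 * gap)          # penalize complexity distance

ref = "SELECT name FROM t WHERE id = 1"
ans = "SELECT name FROM t WHERE id = 1 AND id = 1"
print(grade(ans, ref, [("a",)], [("a",)]))    # -> 1.0 (same keyword count)
```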

Relevance: 80.00%

Abstract:

A number of software process improvement initiatives have emerged recently, aiming to improve quality and productivity in software development organizations. Several models and standards pursue the introduction of improvements in the software development process; MPS.BR is one of them. This process improvement model targets micro, small, and medium-sized companies so as to meet their business needs, and it is the model explored in this work. Adopting an improvement model brings several advantages, one of which is the definition of a systematic software development process that supports both the quality and productivity of the process and the quality of the product developed. With a defined process model, the organization can count on various benefits associated with standardization, such as optimization, reduced rework costs, and fewer product defects, among others. However, there are no ready-made models that can be applied directly to a specific software development company; it is therefore necessary to model the process, customizing it, with the final goal of generating a model that adequately represents the organization's process. One of the difficulties in adopting models such as MPS.BR is the lack of a methodology showing how the improvement should be implemented, rather than only what should be done. This work proposes a methodology for implementing the MPS.BR model based on the IDEAL implementation model, using a specific tool called WebAPSEE. The methodology was tried out at CTIC - Centro de Tecnologia da Informação e Comunicação at UFPA, which, at the end of the work, was appraised at MPS.BR Level G.