960 results for Computer software maintenance
Abstract:
The vast majority of people in contemporary society own a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with building secure software is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis focuses on the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed in this work. Organizational policy, management issues and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server, Apache James, whose changelog, recorded security issues and source code were available. The research revealed that there is consensus on the terminology of software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues, and they are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out the areas where security metrics must improve if verification of security at the design level is to be achieved.
Abstract:
Software maintenance is a very important phase of the software life cycle. After the development and deployment phases, it is the one that lasts the longest and accounts for the majority of costs in industry. These costs are largely due to the difficulty of making changes in the software and of containing the effects of those changes. From this perspective, much work has targeted the analysis and prediction of change impact on software. Existing approaches require many inputs that are difficult to obtain. In this thesis, we use a probabilistic approach. Bayesian classifiers are trained with historical change data. They consider the relations between elements (inputs) and the dependencies among historical changes (outputs). More specifically, a complex change is divided into elementary changes. For each type of elementary change, we build a Bayesian classifier. To predict the impact of a complex change decomposed into elementary changes, the individual decisions of the classifiers are combined according to various strategies. Our working hypothesis is that our approach can be used in two scenarios. In the first scenario, the training data are extracted from earlier versions of the software whose change impact we want to analyse. In the second scenario, the training data come from other software systems. This second scenario is interesting because it allows our approach to be applied to software that has no change history. We were able to correctly predict the impacts of elementary changes. The results showed that using conceptual classifiers gives the best results. For the prediction of complex changes, the "Voting" and OR combination methods are preferable when the number of changes to analyse is large. Conversely, when this number is limited, the Noisy-Or method or its modified version is recommended.
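To make the combination step above concrete, the following Python sketch shows how the outputs of per-elementary-change classifiers might be merged under the OR, Voting and Noisy-Or strategies; the function names, scores and the 0.5 threshold are illustrative assumptions, not the thesis's implementation.

from math import prod

def combine_or(scores, threshold=0.5):
    # OR strategy: predict an impact if any elementary-change classifier does.
    return any(p >= threshold for p in scores)

def combine_voting(scores, threshold=0.5):
    # Voting strategy: predict an impact if a majority of classifiers do.
    return sum(p >= threshold for p in scores) > len(scores) / 2

def combine_noisy_or(scores, threshold=0.5):
    # Noisy-Or strategy: treat each classifier as an independent cause of impact.
    return 1.0 - prod(1.0 - p for p in scores) >= threshold

# A complex change decomposed into three elementary changes, each scored by its
# own Bayesian classifier for one candidate impacted element (hypothetical values).
scores = [0.2, 0.35, 0.4]
print(combine_or(scores), combine_voting(scores), combine_noisy_or(scores))
# -> False False True  (Noisy-Or accumulates weak individual evidence)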
Abstract:
One of the fastest expanding areas of computer exploitation is in embedded systems, whose prime function is not that of computing, but which nevertheless require information processing in order to carry out their prime function. Advances in hardware technology have made multi-microprocessor systems a viable alternative to uniprocessor systems in many embedded application areas. This thesis reports the results of investigations carried out on multi-microprocessors oriented towards embedded applications, with a view to enhancing throughput and reliability. An ideal controller for multiprocessor operation is developed which would smooth the sharing of routines and enable more powerful and efficient code/data interchange. Results of performance evaluation are appended. A typical application scenario is presented, which calls for classifying tasks based on characteristic features that were identified. The different classes are introduced along with a partitioned storage scheme. A theoretical analysis is also given. A review of schemes available for reducing disc access time is carried out and a new scheme presented. This is found to speed up database transactions in embedded systems. The significance of software maintenance and adaptation in such applications is highlighted. A novel scheme of providing a maintenance folio to system firmware is presented, along with experimental results. Processing reliability can be enhanced if a facility exists to check whether a particular instruction in a stream is appropriate. The likelihood of occurrence of a particular instruction can be judged more reliably if the number of instructions in the set is smaller. A new organisation is derived to form the basis for further work. Some early results that will help steer the course of the work are presented.
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation. The proposed system is completely distributed and automates the mechanisms for adapting the logical network to the offered load. The system is able to dynamically manage a logical network, such as a virtual path network in ATM or a label-switched path network in MPLS or GMPLS. The system design and implementation are based on a multi-agent system (MAS) which makes the decisions of when and how to change a logical path. Despite the lack of a centralised global network view, results show that the MAS manages network resources effectively, reducing the connection blocking probability and therefore achieving better utilisation of network resources. We also include details of its architecture and implementation.
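As an illustration of the kind of local decision such an agent might take, the following Python sketch monitors a logical path's blocking ratio and utilisation and proposes a capacity change; the thresholds, class names and the 10-unit step are hypothetical and do not reproduce the paper's MAS design.

from dataclasses import dataclass

@dataclass
class LogicalPath:
    path_id: str
    capacity: float      # reserved bandwidth on this virtual/label-switched path
    carried_load: float  # bandwidth currently in use
    offered: int         # connections offered in the current monitoring period
    blocked: int         # connections rejected in the current monitoring period

class PathAgent:
    # One agent per path: it sees only local measurements, not a global network view.
    def __init__(self, path, max_blocking=0.02, min_utilisation=0.4, step=10.0):
        self.path = path
        self.max_blocking = max_blocking
        self.min_utilisation = min_utilisation
        self.step = step

    def decide(self):
        p = self.path
        blocking = p.blocked / p.offered if p.offered else 0.0
        utilisation = p.carried_load / p.capacity if p.capacity else 0.0
        if blocking > self.max_blocking:
            return ("increase", p.path_id, self.step)   # negotiate extra bandwidth
        if utilisation < self.min_utilisation:
            return ("decrease", p.path_id, self.step)   # release unused bandwidth
        return ("keep", p.path_id, 0.0)

agent = PathAgent(LogicalPath("LSP-1", capacity=100.0, carried_load=70.0,
                              offered=100, blocked=5))
print(agent.decide())   # blocking 0.05 > 0.02 -> ('increase', 'LSP-1', 10.0)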
Abstract:
Expert supervision systems are software applications specially designed to automate process monitoring. The goal is to reduce dependency on human operators in assuring the correct operation of a process, including faulty situations. Constructing this kind of application involves a significant design and development effort in order to represent and manipulate process data and behaviour at different degrees of abstraction, and to interface with the data acquisition systems connected to the process. This is an open problem that becomes harder as the number of variables, parameters and relations needed to account for the complexity of the process grows. Multiple specialised modules, each tuned to solve a simpler task and operating under a co-ordination mechanism, provide a solution. A modular architecture based on the concept of software agents, taking advantage of the integration of diverse knowledge-based techniques, is proposed for this purpose. The components (software agents, communication mechanisms and perception/action mechanisms) are based on ICa (Intelligent Control architecture), a software middleware that supports building applications with software-agent features.
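The following Python sketch illustrates the modular idea described in the abstract, specialised agents with perception and assessment roles operating under a coordinator; it does not model ICa itself, and all class names, variables and limits are illustrative assumptions.

from abc import ABC, abstractmethod

class SupervisionAgent(ABC):
    # A specialised module solving one simple monitoring task.
    def __init__(self, name):
        self.name = name

    @abstractmethod
    def perceive(self, process_data):
        """Read the process variables this agent is specialised in."""

    @abstractmethod
    def assess(self):
        """Return a (status, detail) message for the coordinator."""

class LevelAgent(SupervisionAgent):
    def __init__(self, name, high_limit):
        super().__init__(name)
        self.high_limit = high_limit
        self.level = None

    def perceive(self, process_data):
        self.level = process_data.get("tank_level")

    def assess(self):
        if self.level is not None and self.level > self.high_limit:
            return ("fault", f"{self.name}: level {self.level} above {self.high_limit}")
        return ("ok", self.name)

class Coordinator:
    # Runs one supervision cycle: fan the acquired data out, collect assessments.
    def __init__(self, agents):
        self.agents = agents

    def cycle(self, process_data):
        for agent in self.agents:
            agent.perceive(process_data)
        return [agent.assess() for agent in self.agents]

coordinator = Coordinator([LevelAgent("level-monitor", high_limit=0.9)])
print(coordinator.cycle({"tank_level": 0.95}))
# -> [('fault', 'level-monitor: level 0.95 above 0.9')]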
Abstract:
In Empire and Multitude, Antonio Negri and Michael Hardt propose that in today's world the dominant force controlling capitalism, and thus power, is the Empire. The Empire draws its strength from the control of intellectual production, and its power is growing during this period of transition in the capitalist model. This essay argues that those oppressed by the Empire, who as a class make up the multitude, need free software in order to realise their dream: democracy. This software is at once the best example of what democracy can be and a tool that allows it to be extended. Moreover, its potential in the Andean region is even greater because of the weakness of the liberal model of democracy promoted by the Empire.
Abstract:
Consider the statement "this project should cost X and has a risk of Y". Such statements are used daily in industry as the basis for making decisions. The work reported here is part of a study aimed at providing a rational and pragmatic basis for such statements. Of particular interest are predictions made in the requirements and early phases of projects. A preliminary model has been constructed using Bayesian Belief Networks and, in support of this, a programme to collect and study data during the execution of various software development projects commenced in May 2002. The data collection programme is undertaken under the constraints of a commercial industrial regime of multiple concurrent small- to medium-scale software development projects. Guided by pragmatism, the work is predicated on the use of data that can be collected readily by project managers, including expert judgements, effort, elapsed times and metrics collected within each project.
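As a rough illustration of how a Bayesian Belief Network relates early evidence to a risk statement, the following Python sketch marginalises a single "requirements volatility" node into a "schedule overrun" probability; the nodes and probabilities are invented for illustration and are not taken from the study's model.

# Prior belief about requirements volatility (hypothetical expert judgement).
p_volatility = {"low": 0.7, "high": 0.3}

# Conditional probability table P(schedule overrun | volatility), also hypothetical.
p_overrun_given = {
    "low":  {"yes": 0.15, "no": 0.85},
    "high": {"yes": 0.60, "no": 0.40},
}

def p_overrun(evidence=None):
    # With evidence, read the CPT directly; otherwise marginalise over the prior.
    if evidence is not None:
        return p_overrun_given[evidence]["yes"]
    return sum(p_volatility[v] * p_overrun_given[v]["yes"] for v in p_volatility)

print(round(p_overrun(), 3))   # 0.285 -- prior risk before any project evidence
print(p_overrun("high"))       # 0.6   -- risk given judged-high volatility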
Abstract:
The SPE taxonomy of evolving software systems, first proposed by Lehman in 1980, is re-examined in this work. The primary concepts of software evolution are related to generic theories of evolution, particularly Dawkins' concept of a replicator, to the hermeneutic tradition in philosophy and to Kuhn's concept of paradigm. These concepts provide the foundations that are needed for understanding the phenomenon of software evolution and for refining the definitions of the SPE categories. In particular, this work argues that a software system should be defined as of type P if its controlling stakeholders have made a strategic decision that the system must comply with a single paradigm in its representation of domain knowledge. The proposed refinement of SPE is expected to provide a more productive basis for developing testable hypotheses and models about possible differences in the evolution of E- and P-type systems than is provided by the original scheme. Copyright (C) 2005 John Wiley & Sons, Ltd.