947 results for Factory of software
Abstract:
The problem of software (SW) defects is becoming more and more topical because of the increasing amount of SW and its growing complexity. The majority of these defects are found during the testing phase, which consumes about 40-50% of the development effort. Test automation allows reducing the cost of this process and increasing testing effectiveness. The first tools for automated testing appeared in the mid-1980s, and the automated process was applied to different kinds of SW testing. It soon became obvious that automated testing can cause many problems, such as increased product cost, decreased reliability, and even project failure. This thesis describes the automated testing process and its concepts, lists the main problems, and gives an algorithm for selecting automated test tools. This work also presents an overview of the main automated test tools for embedded systems.
Abstract:
The increase in computational power and the emergence of new computer technologies have led to the popularity of local communications between personal trusted devices. In turn, this has led to the emergence of security problems related to the user data handled in such communications. One of the main aspects of data security assurance is the security of the software running on mobile devices. The aim of this work was to analyze security threats to PeerHood, software intended for personal communications between mobile devices regardless of the underlying network technologies. To reach this goal, risk-based software security testing was performed. The results of the testing showed that the project has several security vulnerabilities, so PeerHood cannot be considered secure software. The analysis made in this work is the first step towards the implementation of PeerHood security mechanisms, as well as towards taking security into account in the development process of the project.
Abstract:
Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and the related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This makes it possible to distribute software across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis focuses on the D-Bus system, which has recently gained a lot of users but is still scantily researched. D-Bus supports asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. We then present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity while maintaining the other qualities at a good level. Finally, we present currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation. The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
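To illustrate the state-caching proxy idea, here is a minimal, hypothetical sketch in Python; the class, method, and signal names (StateCachingProxy, get_all_async, PropertiesChanged) are illustrative stand-ins, not the API of any real D-Bus binding.

```python
# Minimal sketch of a state-caching proxy for a D-Bus-like object system.
# All names here are illustrative, not a real D-Bus binding API.

class StateCachingProxy:
    """Caches remote object properties locally; refreshed by change signals.

    Reads are answered from the cache, so the caller never blocks on a
    round trip the way a pseudo-synchronous call would.
    """

    def __init__(self, remote_object):
        self._remote = remote_object
        self._cache = {}
        # Subscribe to the remote object's change notifications
        # (analogous to org.freedesktop.DBus.Properties.PropertiesChanged).
        remote_object.connect("PropertiesChanged", self._on_changed)
        # One asynchronous GetAll populates the cache initially.
        remote_object.get_all_async(callback=self._cache.update)

    def _on_changed(self, changed_properties):
        self._cache.update(changed_properties)

    def get(self, name):
        # Local, non-blocking read; may be momentarily stale, which is
        # the deliberate trade-off of the pattern.
        return self._cache.get(name)
```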
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definite way to produce high-quality software within time and budget limits. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need for managing this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineer. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way which allows easy management of processes, for example process dissemination, process tailoring and process enactment. Process modeling languages are usually used as tools for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling during an approximately five-year period. The research follows the classical engineering research discipline, where the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed. The research applies a variety of research techniques ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, like lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to more easily compare different software development situations and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners verifies the conclusions of the other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work.
However, the potential of these techniques intrigues practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are shown to be feasible through several case studies where the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
The evolution of digital circuit technology, leading to higher speeds and more reliability, allowed the development of machine controllers adapted to new production systems (e.g., Flexible Manufacturing Systems, FMS). Most controllers are developed in accordance with the CNC technology of the corresponding machine tool manufacturer, and any alteration or adaptation of their components is not easy to implement. Machine designers face hardware and software restrictions such as the lack of interaction among the system's elements and the impossibility of adding new functions. This is due to hardware incompatibility and to software that does not allow alterations to the source program. The introduction of the open architecture philosophy enabled the evolution of a new generation of numeric controllers, bringing conventional CNC technology to the standard IBM-PC microcomputer. As a consequence, the characteristics of the CNC (positioning) and of the microcomputer (ease of programming, system configuration, network communication, etc.) are combined. Some researchers have proposed a flexible structure of software and hardware allowing changes in the basic hardware configuration and in all control software levels. In this work, the development of open architecture controllers based on the OSACA, OMAC, HOAM-CNC and OSEC architectures is described.
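As a rough illustration of the open architecture idea (not code from any of the cited architectures), the sketch below separates the control layer from a replaceable hardware driver layer; all class and method names are hypothetical.

```python
# Illustrative sketch: control software layers sit behind replaceable
# interfaces, so a vendor module can be swapped without touching the
# rest of the controller. Names are invented, not from OSACA/OMAC/
# HOAM-CNC/OSEC.

from abc import ABC, abstractmethod

class AxisDriver(ABC):
    """Hardware-facing layer: one implementation per drive vendor."""
    @abstractmethod
    def move_to(self, position_mm: float) -> None: ...

class StepperAxisDriver(AxisDriver):
    def move_to(self, position_mm: float) -> None:
        print(f"stepper axis -> {position_mm} mm")

class MotionController:
    """Control layer: depends only on the AxisDriver interface."""
    def __init__(self, axes: dict[str, AxisDriver]):
        self.axes = axes

    def run_block(self, targets: dict[str, float]) -> None:
        for axis, pos in targets.items():
            self.axes[axis].move_to(pos)

# Swapping hardware means swapping the driver object, not the controller:
controller = MotionController({"X": StepperAxisDriver(), "Y": StepperAxisDriver()})
controller.run_block({"X": 10.0, "Y": 25.5})
```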
Abstract:
The portfolio as a means of demonstrating personal skills has lately been gaining prominence among technology students. This is partially due to the introduction of electronic portfolios, or e-portfolios. As platforms for e-portfolio management with different approaches have been introduced, the learning cycle, traditional portfolio pedagogy, and learner centricity have sometimes been forgotten, and as a result, the tools have been used for the most part as data depositories. The purpose of this thesis is to show how institutions can support the construction of e-portfolios by IT students through the use of different tools related to study advising, teaching, and learning. The construction process is presented as a cycle based on learning theories. Actions related to the various phases of the e-portfolio construction process are supported by the implementation of software applications. To maximize learner centricity and minimize the intervention of the institution, the evaluated and controlled actions for these practices can be separated from the e-portfolios, leaving the construction of the e-portfolio to the students. The main contributions of this thesis are the implemented applications, which support e-portfolio construction by assisting in planning, organizing, and reflecting activities. Ultimately, this supports the students in constructing better and more extensive e-portfolios. The implemented tools include 1) JobSkillSearcher, to help students recognize the skill demands of the ICT industry, 2) WebTUTOR, to support students' personal study planning, 3) Learning Styles, to determine students' learning styles, and 4) MyPeerReview, to provide a platform on which to carry out anonymous peer review processes in courses. The most visible outcome concerning the e-portfolio is its representation, meaning that one can use it to demonstrate personal achievements when seeking a job and gaining employment. Testing the tools and the selected open-source e-portfolio application indicates that the richness of e-portfolio content can be increased by using the implemented applications.
Abstract:
Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs using a programming language into a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system. It is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but in the case of large systems, the need arises for mechanisms that analyze these designs automatically. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates the UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standards-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically checking hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
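As a hedged illustration of the first step, translating designs into logic facts, the following sketch maps a few UML class-diagram elements to Prolog-style facts; the fact vocabulary is invented for this example and is not the thesis's actual encoding.

```python
# A minimal sketch (not the thesis tool) of translating UML class-diagram
# elements into logic facts that a reasoner could consume.

def uml_to_facts(classes, generalizations, associations):
    """classes: iterable of class names;
    generalizations: (subclass, superclass) pairs;
    associations: (name, source, target) triples."""
    facts = []
    for c in classes:
        facts.append(f"class({c}).")
    for sub, sup in generalizations:
        facts.append(f"subclass_of({sub}, {sup}).")
    for name, src, dst in associations:
        facts.append(f"association({name}, {src}, {dst}).")
    return facts

# A reasoner could now infer consequences of these facts, e.g. detect
# a cycle in the generalization hierarchy.
print("\n".join(uml_to_facts(
    classes=["Order", "Invoice"],
    generalizations=[("Invoice", "Document")],
    associations=[("bills", "Invoice", "Order")],
)))
```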
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke the CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; this is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the Web Ontology Language, OWL 2, and can thus be part of the semantic web. These interfaces are used with OWL 2 reasoners to check for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
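To give a flavor of code skeletons guarded by pre- and post-conditions, here is a hypothetical sketch in Python; the resource, state values, and exception names are illustrative assumptions, not the actual output of the thesis's generation tool.

```python
# Hedged sketch of a stateful REST method skeleton with a precondition
# and a postcondition check. All names are illustrative.

class PreconditionError(Exception): ...
class PostconditionError(Exception): ...

class Booking:
    def __init__(self):
        self.state = "created"

    def confirm(self):
        # Precondition: the client may only confirm a created booking;
        # this enforces the legal request sequence of the stateful service.
        if self.state != "created":
            raise PreconditionError("confirm() requires state 'created'")

        # --- developer fills in the actual business logic here ---
        self.state = "confirmed"

        # Postcondition: the implementation must leave the resource in
        # the state promised by the behavioral interface.
        if self.state != "confirmed":
            raise PostconditionError("confirm() must yield state 'confirmed'")
```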
Abstract:
The purpose of this study was to explore the software development methods and quality assurance practices used by the South Korean software industry. Empirical data was collected by conducting a survey that focused on three main parts: software life cycle models and methods; software quality assurance, including quality standards; and the strengths and weaknesses of the South Korean software industry. The results of the survey showed that the use of agile methods slightly surpasses the use of traditional software development methods. The survey also revealed the interesting result that almost half of the South Korean companies do not use any software quality assurance plan in their projects. Regarding the state of the South Korean software industry, a large number of respondents thought that, despite its weaknesses, the state of software development in South Korea will improve in the future.
Abstract:
This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role of protector. It investigates concepts related to the 'ontology of becoming' and examines what it is that information securing produces. The research is theory-driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari's concepts of 'machine', 'territory' and 'becoming', and Michel Serres's concept of 'parasite'), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection systems from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories, the insides and outsides of information systems, are ontologically similar; the only difference is in the order of the territories, not in the ontological status of the entities that inhabit them. In other words, malicious software is ontologically similar to benign software; both are users in terms of the system. The difference is based on the order of the system and its users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in the terms of this study, a 'parasite') to the information system that it protects. Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of the software itself, the implementation of a copy protection system is an entirely external addition. In fact, this parasitic addition makes the software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages; it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers, and information security policies entice social engineers to use and exploit information security techniques in novel ways.
Abstract:
Software quality has become an important research subject, not only in the Information and Communication Technology spheres, but also in other industries at large where software is applied. Software quality is not happenstance; it is defined, planned, and built into the software product throughout the Software Development Life Cycle. The research objective of this study is to investigate the roles of the human and organizational factors that influence software quality construction. The study employs Straussian grounded theory. The empirical data was collected from 13 software companies and includes 40 interviews. The results of the study suggest that tools, infrastructure and other resources have a positive impact on software quality, but that the human factors involved in the software development processes determine the quality of the products developed. On the other hand, development methods were found to have little effect on software quality. The research suggests that software quality construction is an information-intensive process whereby organizational structures, the mode of operation, and the information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction.
Abstract:
Many-core systems provide great potential for application performance with their massively parallel structure. Such systems are currently being integrated into most parts of daily life, from high-end server farms to desktop systems, laptops and mobile devices. Yet these systems face increasing challenges such as high temperatures causing physical damage, high electricity bills for both servers and individual users, unpleasant noise levels due to active cooling, and unrealistic battery drainage in mobile devices; all factors caused directly by poor energy efficiency. Power management has traditionally been an area of research providing hardware solutions or runtime power management in the operating system in the form of frequency governors. Energy awareness in application software is currently non-existent: applications are not involved in power management decisions, nor does any interface exist between the applications and the runtime system to provide such facilities. Power management in the operating system is therefore performed purely on the basis of indirect implications of software execution, usually referred to as the workload. It often results in over-allocation of resources and hence in wasted power. This thesis discusses power management strategies for many-core systems that increase the energy-efficiency awareness of application software. The presented approach allows meta-data descriptions in the applications and is manifested in two design recommendations, 1) energy-aware mapping and 2) energy-aware execution, which allow the applications to directly influence power management decisions. The recommendations eliminate over-allocation of resources and increase the energy efficiency of the computing system. Both recommendations are fully supported by a provided interface in combination with a novel power management runtime system called Bricktop. The work presented in this thesis allows both new and legacy software to execute with the most energy-efficient mapping on a many-core CPU and at the most energy-efficient performance level. A set of case studies demonstrates real-world energy savings in a wide range of applications without performance degradation.
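A toy sketch of the meta-data idea: the application declares energy hints and a runtime maps the task accordingly. The function and field names are hypothetical stand-ins; the abstract does not give Bricktop's real interface.

```python
# Illustrative sketch of application-supplied energy meta-data used for
# energy-aware mapping. All names are invented for this example.

from dataclasses import dataclass

@dataclass
class EnergyHints:
    parallelism: int           # how many cores the task can actually use
    deadline_ms: float | None  # latency bound, if any; None = best effort

def map_task(hints: EnergyHints, available_cores: int) -> dict:
    """Toy energy-aware mapping: never over-allocate cores, and run
    best-effort work at a low frequency level."""
    cores = min(hints.parallelism, available_cores)  # no over-allocation
    freq = "high" if hints.deadline_ms is not None else "low"
    return {"cores": cores, "frequency": freq}

print(map_task(EnergyHints(parallelism=4, deadline_ms=None), available_cores=16))
# -> {'cores': 4, 'frequency': 'low'}  (resources matched to actual need)
```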
Abstract:
Software product metrics aim at measuring the quality of software. Modularity is an essential factor in software quality. In this work, metrics related to modularity, and especially to the cohesion of modules, are considered. The existing metrics are evaluated, and several new alternatives are proposed. The idea of module cohesion is that a module or a class should consist of related parts. The closely related principle of coupling says that the relationships between modules should be minimized. First, internal cohesion metrics are considered. The relations that are internal to classes are shown to be useless for quality measurement. Second, we consider external relationships for cohesion. A detailed analysis using design patterns and refactorings confirms that external cohesion is a better quality indicator than internal cohesion. Third, motivated by the successes (and problems) of external cohesion metrics, another kind of metric is proposed that represents the quality of the modularity of software. This metric can be applied to refactorings related to classes, resulting in a refactoring suggestion system. To describe the metrics formally, a notation for programs is developed. Because of the recursive nature of programming languages, the properties of programs are most compactly represented using grammars and formal languages. The tools used for metrics calculation are also described.
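For concreteness, the sketch below computes a classic LCOM-style internal cohesion score, the kind of internal metric this work evaluates (and finds wanting); it is not one of the new metrics proposed here.

```python
# Sketch of a classic *internal* cohesion metric: methods are "related"
# when they touch a shared attribute (an LCOM-style count).

from itertools import combinations

def lcom_pairs(method_attrs: dict[str, set[str]]) -> int:
    """LCOM-style score: pairs of methods sharing no attribute, minus
    pairs sharing at least one; floored at 0. Higher = less cohesive."""
    disjoint = shared = 0
    for (_, a), (_, b) in combinations(method_attrs.items(), 2):
        if a & b:
            shared += 1
        else:
            disjoint += 1
    return max(disjoint - shared, 0)

# Two unrelated method clusters -> poor cohesion:
print(lcom_pairs({
    "read":  {"buffer"},
    "write": {"buffer"},
    "log":   {"logfile"},
}))  # read/write share 'buffer'; 'log' is disjoint from both -> 2 - 1 = 1
```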
Abstract:
A main objective of software engineering is to produce complex, large, and reliable software within a reasonable time. Object-oriented (OO) technology has provided good concepts and modeling and programming techniques that have made it possible to develop complex applications in both the academic and industrial worlds. This experience has, however, revealed the weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. These crosscutting concerns manifest as the same code being scattered across several modules of the system, or as several pieces of code being tangled within a single module. This new way of programming makes it possible to implement each concern independently of the others and then to assemble them according to well-defined rules. AO programming thus promises better productivity, better code reuse, and better adaptability of code to change. This new approach quickly spread across the whole software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents many challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Consequently, modular reasoning about these programs is required; otherwise they would have to be re-examined in full every time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since applied aspects often change the behavior of their base components [47]. The same difficulties are present in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are weakly covered and constitute a very interesting field of research. Likewise, interaction between aspects is a serious problem in the aspect community. To address these problems, we have chosen to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we have built on the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of our thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we have defined a logic, LA, which is used in the body of specifications to describe the behavior of these components. The third contribution is the definition of the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
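As a minimal illustration of crosscutting concerns and weaving, the sketch below uses a Python decorator as a stand-in for an aspect weaver; the thesis itself works at the level of algebraic specifications and category theory, not executable code, so every name here is illustrative.

```python
# Minimal sketch of the crosscutting-concern problem and weaving.

import functools

def logging_aspect(func):
    """The 'logging' concern, written once instead of being scattered
    through every class (the code-scattering problem of plain OO)."""
    @functools.wraps(func)
    def woven(*args, **kwargs):
        print(f"enter {func.__name__}")   # advice before the join point
        result = func(*args, **kwargs)
        print(f"exit {func.__name__}")    # advice after the join point
        return result
    return woven

class Account:
    @logging_aspect  # weaving: the aspect is composed with the base module
    def deposit(self, amount):
        return amount

Account().deposit(100)  # prints enter/exit around the base behavior
```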