936 results for "Software development process"


Relevance: 100.00%

Abstract:

During recent years, mobile phone markets have changed significantly. Asian markets, with their millions of end users and several major mobile network operators, have become vital for manufacturers. As a result, software development has become global, with companies running research and development sites in multiple locations, including Asia. The reasons for this lie not only in reducing labor costs but also in capitalizing on local knowledge and know-how. A ramp-up site has multiple effects on software development and software release activities. This thesis presents the importance of software testing as part of the software development process and highlights issues that need to be considered during ramp-up activities. In addition, this work emphasizes the importance of communication between the parties and of information gathering prior to setting up the ramp-up site. The outcome of this thesis was a successful software testing site ramp-up within the set time limits: the quality of the software testing work was assured and the ramp-up project requirements were met.

Relevance: 100.00%

Abstract:

Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs in a programming language to obtain a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating from designs may appear in the final system, and it is considered economical to rectify such problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but in the case of large systems, mechanisms that automatically analyze these designs are needed. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically analyzing hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but it is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
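
As a minimal sketch of the idea of turning a design into logic facts and checking a logical consequence (a hypothetical Python illustration, not the thesis's reasoner-based tooling), consider a UML generalization hierarchy encoded as child/parent facts and a naive closure computation that detects cyclic inheritance:

```python
# Minimal sketch (hypothetical, not the thesis tooling): UML generalization
# relations are encoded as logic-style facts, and a naive inference step
# computes their transitive closure to check one consequence: whether the
# design contains a cyclic inheritance chain.
from itertools import product

# Facts extracted from a hypothetical UML class diagram: generalizes(child, parent)
facts = {
    ("Account", "Asset"),
    ("SavingsAccount", "Account"),
    ("Asset", "SavingsAccount"),   # deliberately closes a cycle
}

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        snapshot = list(closure)
        for (a, b), (c, d) in product(snapshot, snapshot):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def cyclic_classes(pairs):
    # A class generalizing itself in the closure signals an invalid design.
    return sorted({a for (a, b) in transitive_closure(pairs) if a == b})

if __name__ == "__main__":
    bad = cyclic_classes(facts)
    print("design consistent" if not bad else f"cyclic generalization involving: {bad}")
```

A real reasoner would of course handle far richer consequences (multiplicities, well-formedness rules, OCL constraints); the sketch only shows the facts-plus-inference shape of the approach.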

Relevance: 100.00%

Abstract:

According to the literature, successfully implemented performance measurement and management bring an organization many benefits. The performance of software work affects the profitability of software companies and the success of software projects. Efforts to improve the performance of software work have largely concentrated on process improvement. However, many factors other than process indicators underlie the performance of software work. Commitment and motivation are seen as increasingly important factors behind software work performance, so performance management must take the personnel perspective into account better than it does today. The goal of this study was to examine, through an analysis of performance management frameworks, the factors underlying software work performance, the role of motivation, and management styles, what kind of performance measurement and management system (PMS) would support the management of software work performance while taking the personnel perspective into account. The study analyzed earlier research on the topic and, in addition, interviewed industry experts. As its result, the study presents the most important factors underlying software work performance, whose improvement performance management must enable. These factors were found to be closely related to the personnel's motivational factors, whose engaging development management must also support. The results were condensed into recommendations concerning management and a measurement framework model that can be used to manage software work performance while taking the personnel perspective into account. The model describes the factors to be measured and managed at the individual and team level, in supervisory work, and in human resource management (HRM).

Relevance: 100.00%

Abstract:

A principal objective of software engineering is to be able to produce complex, large-scale, and reliable software in a reasonable time. Object-oriented (OO) technology has provided good concepts and good modeling and programming techniques, which have made it possible to develop complex applications both in academia and in industry. This experience has, however, revealed the weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. Crosscutting concerns manifest themselves as the same code being scattered across several modules of the system, or as several pieces of code being tangled within a single module. This new way of programming makes it possible to implement each concern independently of the others and then to assemble them according to well-defined rules. AO programming therefore promises better productivity, better code reuse, and better adaptability of code to change. This new approach quickly spread across the whole software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents many challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Consequently, modular reasoning about these programs is required; otherwise they would have to be re-examined in their entirety every time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since the applied aspects often change the behavior of their base components [47]. The same difficulties arise in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are only weakly covered and constitute a very interesting field of research. Likewise, interactions between aspects are a serious problem in the aspect community. To address these problems, we chose to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we built on the work of Wiels [110] and on other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of our thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we defined a logic, LA, which is used in the body of specifications to describe the behavior of these components. The third contribution is the definition of the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
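
To make the crosscutting-concern problem concrete (a hypothetical Python sketch in the informal AOP style, not the algebraic/categorical weaving operator defined in the thesis), the following example expresses a logging concern once and weaves it onto a base component's methods instead of scattering the logging code across every class:

```python
# Illustrative sketch only: a logging concern that would otherwise be
# scattered across every class is expressed once and "woven" onto the base
# component. This mimics the AOP idea informally; it is not the weaving
# operator developed in the thesis.
import functools

def logging_aspect(cls, method_names):
    """Weave a logging advice around the named methods of cls."""
    for name in method_names:
        original = getattr(cls, name)

        @functools.wraps(original)
        def woven(self, *args, _orig=original, _name=name, **kwargs):
            print(f"[log] entering {cls.__name__}.{_name}")
            result = _orig(self, *args, **kwargs)
            print(f"[log] leaving  {cls.__name__}.{_name}")
            return result

        setattr(cls, name, woven)
    return cls

class BankAccount:                      # base component: contains no logging code
    def __init__(self, balance=0):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount

logging_aspect(BankAccount, ["deposit"])   # the weaving step
BankAccount(100).deposit(50)               # prints the advice around the call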

Relevance: 100.00%

Abstract:

Code clones are portions of source code that are similar to other parts of the program code. The presence of code clones is generally considered a bad property of software, since clones make maintenance more difficult. Methods for code clone detection have gained significance in recent years, as they play an important role in engineering applications such as program code analysis, program understanding, plagiarism detection, error detection, code compaction, and many similar tasks. Nevertheless, several features of code clones, if properly utilized, can make the software development process easier. In this work, we point out one such feature, which highlights the relevance of code clones in test sequence identification. Here, program slicing is used for code clone detection. In addition, a classification of code clones is presented, and the benefits of using program slicing for code clone detection are discussed.
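
As a minimal illustration of clone detection in general (a hypothetical sketch, not the slicing-based technique described above), the following Python example abstracts identifiers and literals away and flags two fragments as clones when their normalized token streams coincide:

```python
# Minimal sketch (not the slicing-based technique of the paper): detect
# near-duplicate fragments by abstracting identifiers and literals and
# comparing the resulting token streams ("type-2" clones in the usual
# classification).
import io
import keyword
import tokenize

def normalized_tokens(source):
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            tokens.append("ID")                      # abstract away identifiers
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            tokens.append("LIT")                     # abstract away literals
        elif tok.type in (tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
                          tokenize.DEDENT, tokenize.ENDMARKER, tokenize.COMMENT):
            continue                                 # ignore layout and comments
        else:
            tokens.append(tok.string)
    return tokens

def are_clones(frag_a, frag_b):
    return normalized_tokens(frag_a) == normalized_tokens(frag_b)

a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
b = "def soma(vals):\n    acc = 0\n    for v in vals:\n        acc += v\n    return acc\n"
print(are_clones(a, b))   # True: identical structure, only identifiers renamed
```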

Relevance: 100.00%

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs in this evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases was superior to the other representations.
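
The following schematic Python sketch shows the shape of the evolutionary loop described above; the thesis's distributed program representations and randomized network simulations are replaced here by a toy objective (evolving an all-ones bit string), so the example only illustrates selection, crossover, and mutation driven by an objective function:

```python
# Schematic sketch of an evolutionary (Genetic Programming style) loop. The
# program representations and simulation-based ratings of the thesis are
# replaced by a toy objective: evolve a bit string of all ones.
import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60

def fitness(genome):                      # stands in for the simulation-based rating
    return sum(genome)

def mutate(genome, rate=0.03):            # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):                      # single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def tournament(pop, k=3):                 # select the best of k random candidates
    return max(random.sample(pop, k), key=fitness)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```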

Relevance: 100.00%

Abstract:

The core of this work is the investigation of methods, techniques, and tools for fault localization in model-based software development processes. To this end, a novel model-based software development process that I co-developed, the so-called Fujaba Process, is first presented. This process is driven by use case scenarios, which are formalized by means of special collaboration diagrams. The remaining artifacts of the process, up to the finished application, are also modeled with UML diagram types; no programming at the source-code level is necessary. Tool support for the presented process is provided by the Fujaba CASE tool. Large parts of the tool support for the Fujaba Process, including the tool support for testing and debugging, were developed as part of this work. The first part of the thesis explains the Fujaba Process in detail and reports our experience with applying the process in industrial projects and in teaching. The second part describes the test generation developed in this work, which has become an important part of the Fujaba Process: executable test cases are generated from the formalized use case scenarios. The underlying concept, the concrete technical realization, and practical experience with the developed test generation are presented. The last part deals with debugging in the Fujaba Process. Several concepts and techniques developed in this work are presented that simplify fault localization during application development. Care was taken that debugging, like all other steps in the Fujaba Process, takes place exclusively at the model level. Among other things, techniques for the stepwise execution of models, an object browser, and a debugger that allows the backwards execution of programs (back-in-time debugging) are presented. All described concepts were implemented and evaluated in this work as plugins for the Eclipse version of Fujaba, Fujaba4Eclipse. When implementing the plugins, attention was paid to close integration with Fujaba on the one hand and with Eclipse on the other. In summary, a development process is presented, together with the possibility of identifying faults in it with automated tests, localizing these faults in the program with special debugging techniques, and finally fixing them, with the complete process taking place at the model level. For the testing and debugging techniques, plugins for Fujaba4Eclipse were developed in this work that support the developer as well as possible in the corresponding activities.
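
As a loose analogy to the idea of generating executable tests from formalized scenarios (a hypothetical Python sketch, not the Fujaba/Fujaba4Eclipse tooling), the following example maps each scenario step to a call on a system under test plus an assertion on the expected state:

```python
# Loose analogy only (not the Fujaba tooling): a formalized use case
# scenario, given as a list of (action, arguments, expected-state) steps,
# is turned into an executable test function.

def generate_test(scenario, system_factory):
    def test():
        system = system_factory()
        for action, args, expected_state in scenario:
            getattr(system, action)(*args)          # execute the scenario step
            actual = system.state()
            assert actual == expected_state, f"{action}{args}: {actual} != {expected_state}"
        return "scenario passed"
    return test

class Counter:                                      # hypothetical system under test
    def __init__(self):
        self.value = 0
    def increment(self, by):
        self.value += by
    def state(self):
        return {"value": self.value}

scenario = [("increment", (2,), {"value": 2}),
            ("increment", (3,), {"value": 5})]

print(generate_test(scenario, Counter)())           # "scenario passed"
```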

Relevance: 100.00%

Abstract:

The ability of multinational enterprises (MNEs) to adapt to the opportunities and threats of their markets is one of the most important strategic factors in today's business dynamics. This need for adaptation is not restricted to developed countries but also applies to emerging countries. In this context, it is necessary to understand the strategy of multinational companies from emerging countries through the study of dynamic capabilities and their management models. The general objective of this work was therefore to analyze the development and transfer of dynamic capabilities between headquarters and subsidiaries. The theoretical foundation of this study comprised the resource-based view, organizational ambidexterity (exploration vs. exploitation), dynamic capabilities, and strategic management models of MNEs. The chosen definition of dynamic capabilities, "the organization's systematic abilities to integrate, build, and reconfigure its organizational competences according to market threats and opportunities", served as the basis for the whole research. Methodologically, a qualitative approach was used, with a single case study as the research method. After defining criteria for selecting the organization to be studied (Brazilian MNEs, technology sector, more than one subsidiary, and operations in several segments), the company Alpha was chosen. Data collection on Alpha was based on multiple sources, using secondary information and in-depth interviews conducted with key people in the organization, in order to investigate directly the organization's processes, competences, and resources. Two dynamic capabilities were found at Alpha: "software development process" and "development of new services". The first dynamic capability is present at headquarters and in the subsidiaries; it was transferred in full to all subsidiaries, with integration and the competitive context as antecedent factors. The second is present only at headquarters, had entrepreneurial orientation and initiatives as its main antecedent factors, and was not transferred to any subsidiary. After analyzing the results, it could be concluded that the two dynamic capabilities, in particular the software development process capability, generate competitive advantage for Alpha.

Relevance: 100.00%

Abstract:

This thesis is interventionist in nature and aims to create an alternative way of controlling and evaluating the implementation of public policies developed at the Institute for Technical Assistance and Rural Extension of Rio Grande do Norte State. The setting is a public institution belonging to the Rio Grande do Norte state government, and the work adopts the design science research methodology, generating a set of artifacts that guide the development of a computerized information system. To support the decisions taken, the literature was reviewed in order to highlight the concepts used as a basis for building the intervention. The use of an effective systems analysis methodology, Iconix, provides a software development process within a short time frame. Among the many artifacts created by the methodology is a software system capable of running on the Internet with G2C (government-to-citizen) behavior, which is suggested as a management tool for monitoring the artifacts generated by the various methods. Moreover, the work reveals barriers faced in the public-sector environment, such as lack of infrastructure, workforce-related constraints, and the behavior of executives.

Relevance: 100.00%

Abstract:

Aspect-oriented approaches associated with different activities of the software development process are, in general, independent, and their models and artifacts are not aligned within a coherent process. In model-driven development, the various models and the correspondences between them are rigorously specified. With the integration of aspect-oriented software development (AOSD) and model-driven development (MDD), it is possible to automatically propagate models from one activity to another, avoiding the loss of information and of important decisions established in each activity. This work presents MARISA-MDD, a model-based strategy that integrates aspect-oriented requirements, architecture, and detailed design, using the AOV-graph, AspectualACME, and aSideML languages, respectively. MARISA-MDD defines, for each activity, representative models (and corresponding metamodels) and a number of transformations between the models of each language. These transformations have been specified and implemented in ATL (ATLAS Transformation Language), in the Eclipse environment. MARISA-MDD allows automatic propagation between AOV-graph, AspectualACME, and aSideML models. To validate the proposed approach, two case studies, Health Watcher and Mobile Media, were used in the MARISA-MDD environment for the automatic generation of AspectualACME and aSideML models from the AOV-graph model.
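
MARISA-MDD specifies its transformations in ATL; purely as an illustration of what a model-to-model transformation rule does, the following hypothetical Python sketch maps elements of a small source model to elements of a target model, one rule per source element type:

```python
# Purely illustrative: MARISA-MDD specifies its transformations in ATL, not
# Python. This sketch only shows the shape of a model-to-model transformation:
# each source-model element of a given type is mapped to a target-model element.

source_model = {  # hypothetical, simplified "requirements-level" model
    "concerns": [{"name": "Logging", "crosscuts": ["Checkout", "Payment"]}],
    "modules":  [{"name": "Checkout"}, {"name": "Payment"}],
}

def transform(model):
    """One rule per source element type, as in a typical M2M transformation."""
    target = {"components": [], "aspects": []}
    for m in model["modules"]:                      # rule: Module -> Component
        target["components"].append({"name": m["name"]})
    for c in model["concerns"]:                     # rule: Concern -> Aspect
        target["aspects"].append({"name": c["name"],
                                  "advises": list(c["crosscuts"])})
    return target

print(transform(source_model))
```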

Relevance: 100.00%

Abstract:

Currently, there are several aspect-oriented approaches related to different stages of the software development process. These approaches often lack integration with each other, and their models and artifacts are not aligned in a coherent process. The integration of aspect-oriented software development (AOSD) and model-driven development (MDD) enables automatic propagation of models from one phase to another, avoiding the loss of important information and decisions established in each of them. This paper presents a model-driven approach, called Marisa-AOCode, which supports the transformation of detailed design artifacts into code in different aspect-oriented programming languages. The approach defines transformation rules between aSideML, a modeling language for aspect-oriented detailed design, and Metaspin, a generic metamodel for aspect-oriented programming languages. The instantiation of the generic metamodel (Metaspin) is illustrated by transforming Metaspin for two languages: AspectLua and CaesarJ. We illustrate the approach with a case study based on the Health Watcher system.

Relevance: 100.00%

Abstract:

When the identification of crosscutting concerns is performed from the beginning of development, during the activities of requirements engineering, there are many gains in terms of quality, cost, and efficiency throughout the software development lifecycle. This early identification supports the evolution of requirements, detects possible flaws in the requirements specification, improves traceability among requirements, provides better software modularity, and prevents possible rework. However, despite these advantages, identifying crosscutting concerns during requirements engineering faces several difficulties, such as the lack of systematization and of supporting tools. Furthermore, it is difficult to justify why some concerns are identified as crosscutting and others are not, since this identification is most often made without any methodology that systematizes and grounds it. In this context, this paper proposes an approach based on Grounded Theory, called GT4CCI, for systematizing and grounding the process of identifying crosscutting concerns in the requirements document during the initial stages of the software development process. Grounded Theory is a renowned methodology for the qualitative analysis of data. Through the use of GT4CCI it is possible to better understand, track, and document concerns, adding gains in terms of quality, reliability, and modularity throughout the entire software lifecycle.
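
GT4CCI itself is a qualitative Grounded Theory procedure rather than an algorithm; purely as a toy illustration of the underlying notion that a concern is a crosscutting candidate when it is coded in several otherwise unrelated requirements, the following hypothetical Python sketch counts concern codes across an annotated requirements set:

```python
# Toy illustration only: GT4CCI is a qualitative (Grounded Theory) procedure,
# not this script. The sketch shows the underlying notion: a concern is a
# crosscutting candidate when it appears in several unrelated requirements.
from collections import defaultdict

# Hypothetical requirements already annotated with concern "codes".
coded_requirements = {
    "R1 register complaint":   {"persistence", "security"},
    "R2 query complaint":      {"persistence"},
    "R3 notify health agency": {"security", "distribution"},
    "R4 update complaint":     {"persistence", "security"},
}

def crosscutting_candidates(coded, threshold=3):
    counts = defaultdict(int)
    for concerns in coded.values():
        for concern in concerns:
            counts[concern] += 1
    return sorted(c for c, n in counts.items() if n >= threshold)

print(crosscutting_candidates(coded_requirements))   # ['persistence', 'security']
```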

Relevance: 100.00%

Abstract:

Component-based development of systems has revolutionized the software development process, facilitating maintenance and providing greater reliability and reuse. Nevertheless, even with all the advantages of developing components, their composition remains an important concern. Verification through informal tests is not enough to achieve safe composition, because such tests are not based on formal semantic models with which a system's behaviour can be described precisely. In this context, formal methods provide ways to accurately specify systems through mathematical notation, providing, among other benefits, greater safety. The formal method CSP enables the specification of concurrent systems and the verification of their intrinsic properties, as well as refinement between different models. Some approaches apply constraints expressed in CSP to check the behaviour of component compositions, helping to verify those components in advance. Hence, aiming to assist this process, and considering that the software market increasingly demands automation to reduce effort and speed up business, this work presents a tool that automates the verification of compositions of components, keeping all the complexity of the formal language hidden from users. Thus, through a simple interface, the BST (BRIC-Tool-Suport) tool helps to create and compose components, predicting in advance undesirable behaviours in the system, such as deadlocks.
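
The thesis works with CSP and the BST tool; purely as an illustration of what detecting a deadlock in a composition means, the following hypothetical Python sketch composes two small labelled transition systems that synchronize on shared events and searches for a reachable composed state with no outgoing transition:

```python
# Illustration only: the thesis uses CSP (and the BST tool), not this script.
# Two components are modelled as labelled transition systems that synchronize
# on shared events; a reachable composed state with no outgoing transition
# is a deadlock.

def compose_and_check(lts_a, lts_b, start_a, start_b):
    shared = {e for (_, e) in lts_a} & {e for (_, e) in lts_b}
    frontier, seen = [(start_a, start_b)], set()
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        a, b = state
        successors = []
        for (s, e), t in lts_a.items():
            if s != a:
                continue
            if e in shared:
                if (b, e) in lts_b:                  # both must agree on shared events
                    successors.append((t, lts_b[(b, e)]))
            else:
                successors.append((t, b))            # A moves alone on a private event
        for (s, e), t in lts_b.items():
            if s == b and e not in shared:
                successors.append((a, t))            # B moves alone on a private event
        if not successors:
            return f"deadlock at composed state {state}"
        frontier.extend(successors)
    return "no reachable deadlock"

# Classic ordering conflict: A wants get_x before get_y, B wants the opposite.
lts_a = {("0", "init"): "1", ("1", "get_x"): "2", ("2", "get_y"): "3", ("3", "done"): "0"}
lts_b = {("0", "init"): "1", ("1", "get_y"): "2", ("2", "get_x"): "3", ("3", "done"): "0"}
print(compose_and_check(lts_a, lts_b, "0", "0"))     # deadlock at ('1', '1')
```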

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

A number of software process improvement initiatives have emerged recently with the aim of improving quality and productivity in software development organizations. Several models and standards seek to introduce improvements into the software development process; MPS.BR is one of them. This process improvement model is aimed at micro, small, and medium-sized companies, so as to meet their business needs, and it was the model chosen to be explored in this work. Many advantages come with the adoption of an improvement model; one of them is the definition of a systematic software development process that supports both the quality and productivity of the process and the quality of the developed product. With a defined process model, the organization can count on several benefits associated with standardization, such as optimization, reduction of rework costs, and reduction of product defects, among others. However, there are no ready-made models that can be applied directly to a specific software development company, so it is necessary to model the process, customizing it, with the final objective of generating a model that adequately represents the organization's process. One of the difficulties in adopting models such as MPS.BR is the lack of a methodology that shows how the improvement should be implemented, and not only what should be done. This work proposes a methodology for implementing the MPS.BR model based on the IDEAL implementation model, supported by a specific tool called WebAPSEE. The methodology was tried out at CTIC - Centro de Tecnologia da Informação e Comunicação da UFPA, which at the end of the work was assessed at MPS.BR maturity level G.