998 results for software evolution
Abstract:
To study the origin and evolution of biochemical pathways in microorganisms, we have developed methods and software for automatic, large-scale reconstruction of phylogenetic relationships. We define the complete set of phylogenetic trees derived from the proteome of an organism as the phylome, and we introduce the term phylogenetic connection to describe the relative relationships between taxa in a tree. A query system allows searches for defined categories of trees within the phylome. As a complement, we have developed the pyphy system for visualising the results of complex queries on phylogenetic connections, genomic locations and functional assignments in a graphical format. Our phylogenomics approach, which links phylogenetic information to the flow of biochemical pathways within and among microbial species, has been used to examine more than 8000 phylogenetic trees from seven microbial genomes. The results reveal a rich web of phylogenetic connections; nevertheless, the separation of Bacteria and Archaea into two separate domains remains robust.
Abstract:
Hardware/software partitioning is a fundamental task in the co-design of embedded systems. It decides, taking the design metrics into account, which components will run on a general-purpose processor (software) and which on dedicated hardware. In recent years, several solutions to the partitioning problem driven by metaheuristic algorithms have been proposed. However, owing to the diversity of models and metrics used, choosing the most appropriate algorithm remains an open problem. This work presents a comparison of six metaheuristic algorithms: random search, tabu search, simulated annealing, stochastic hill climbing, genetic algorithm, and evolution strategy. The model used in the comparison aims to minimize the occupied area and the execution time; the model's constraints are treated as penalties so that additional solutions are kept in the search space. The results show that stochastic hill climbing and the evolution strategy obtain the best results overall, followed by the genetic algorithm.
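The penalty-based partitioning model described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cost figures, the area budget, and the penalty weight are all assumed values, and the search shown is a plain stochastic hill climber over a binary hardware/software mapping.

```python
import random

# Hypothetical cost model: each component has an area cost if mapped to
# hardware and an execution-time cost if mapped to software. A binary
# vector x (1 = hardware, 0 = software) is scored by area + time, with
# constraint violations added as penalties rather than discarded, so
# infeasible solutions stay in the search space.
AREA = [12, 7, 25, 9, 14, 30]     # HW area per component (assumed)
TIME = [40, 15, 60, 22, 35, 80]   # SW time per component (assumed)
AREA_BUDGET = 50                  # assumed constraint

def cost(x):
    area = sum(a for a, hw in zip(AREA, x) if hw)
    time = sum(t for t, hw in zip(TIME, x) if not hw)
    penalty = max(0, area - AREA_BUDGET) * 10   # penalize, don't reject
    return area + time + penalty

def stochastic_hill_climb(n, iters=2000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]   # random initial mapping
    for _ in range(iters):
        y = x[:]
        y[rng.randrange(n)] ^= 1                # flip one random mapping
        if cost(y) <= cost(x):                  # accept non-worsening moves
            x = y
    return x, cost(x)

best, c = stochastic_hill_climb(len(AREA))
```

The other five algorithms compared in the abstract differ only in how they generate and accept candidate mappings; the penalty-augmented cost function stays the same.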
Abstract:
Operational capabilities are characterized as an internal resource of the firm and a source of competitive advantage. However, the operations strategy literature provides an inadequate constitutive definition of operational capabilities: it disregards the particularities of different contexts, rests on a limited empirical base, and does not adequately explore the extensive literature on operational practices. When operational practices are operationalized in the firm's internal environment, they can be incorporated into organizational routines and, through the tacit knowledge of production, turn into operational capabilities, thereby creating barriers to imitation. Even so, few researchers have explored operational practices as antecedents of operational capabilities. Based on a literature review, we investigate the nature of operational capabilities; the relationship between operational practices and operational capabilities; the types of operational capabilities characterized in the firm's internal environment; and the impact of operational capabilities on operational performance. We conducted a mixed-methods study. In the qualitative stage, we carried out multiple case studies with four firms: two American multinationals operating in Brazil and two Brazilian firms. We collected data through semi-structured interviews with semi-open questions, based on the literature review on operational practices and operational capabilities. The interviews were conducted in person. In total, 73 interviews were carried out (21 in the first case, 18 in the second, 18 in the third, and 16 in the fourth). All interviews were recorded and transcribed verbatim. We used the NVivo software. In the quantitative stage, our sample comprised 206 firms. The questionnaire was built from an extensive literature review and from the results of the qualitative phase.
The Q-sort method was applied, and a pre-test was conducted with production managers. Measures were taken to reduce common method variance. Ten scales were used in total: 1) Continuous Improvement; 2) Information Management; 3) Learning; 4) Customer Support; 5) Innovation; 6) Operational Efficiency; 7) Flexibility; 8) Customization; 9) Supplier Management; and 10) Operational Performance. We used confirmatory factor analysis to establish reliability as well as content, convergent, and discriminant validity. The data were analyzed with multiple regressions. Our main results were the following. First, operational practices act as antecedents of operational capabilities. Second, we created a typology divided into two constructs. The first construct was named Standalone Capabilities; it consists of zero-order capabilities such as Customer Support, Innovation, Operational Efficiency, Flexibility, and Supplier Management. These operational capabilities aim to improve the firm's processes and have a direct relationship with operational performance. The second construct was named Across-the-Board Capabilities; it is composed of first-order capabilities such as Continuous Learning and Information Management. These operational capabilities are considered dynamic and play the role of reconfiguring the Standalone Capabilities.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
A plethora of process modeling techniques has been proposed over the years. One way of evaluating and comparing the scope and completeness of techniques is by way of representational analysis. The purpose of this paper is to examine how process modeling techniques have developed over the last four decades. The basis of the comparison is the Bunge-Wand-Weber representation model, a benchmark used for the analysis of grammars that purport to model the real world and the interactions within it. This paper presents a comparison of representational analyses of several popular process modeling techniques and has two main outcomes. First, it provides insights, within the boundaries of a representational analysis, into the extent to which process modeling techniques have developed over time. Second, the findings also indicate areas in which the underlying theory seems to be over-engineered or lacking in specialization.
Abstract:
Model transformations are an integral part of model-driven development. Incremental updates are a key execution scenario for transformations in model-based systems, and are especially important for the evolution of such systems. This paper presents a strategy for the incremental maintenance of declarative, rule-based transformation executions. The strategy involves recording dependencies of the transformation execution on information from source models and from the transformation definition. Changes to the source models or the transformation itself can then be directly mapped to their effects on transformation execution, allowing changes to target models to be computed efficiently. This particular approach has many benefits. It supports changes to both source models and transformation definitions, it can be applied to incomplete transformation executions, and a priori knowledge of volatility can be used to further increase the efficiency of change propagation.
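The dependency-recording strategy described in this abstract can be sketched in miniature. This is an illustrative toy, not the paper's system: the class names, the dict-shaped source model, and the rule format are all invented. The idea shown is just the core mechanism: record which source elements each rule execution reads, then map a source change directly to the rule applications that must be re-executed.

```python
from collections import defaultdict

class _Tracker:
    """Wraps the source model and records every key a rule reads."""
    def __init__(self, data, read):
        self._data, self._read = data, read
    def __getitem__(self, key):
        self._read.add(key)
        return self._data[key]

class IncrementalEngine:
    def __init__(self, rules):
        self.rules = rules             # rule name -> fn(source) -> output
        self.deps = defaultdict(set)   # source key -> names of dependent rules
        self.target = {}               # rule name -> produced target output

    def _apply(self, name, source):
        read = set()
        self.target[name] = self.rules[name](_Tracker(source, read))
        for key in read:               # record the execution's dependencies
            self.deps[key].add(name)

    def run(self, source):
        for name in self.rules:
            self._apply(name, source)

    def update(self, source, changed_key):
        # Propagate a source change: re-apply only the affected rules,
        # instead of re-running the whole transformation.
        for name in list(self.deps.get(changed_key, ())):
            self._apply(name, source)

# Two toy "transformation rules" over a dict-shaped source model.
rules = {
    "upper_name": lambda s: s["name"].upper(),
    "double_size": lambda s: s["size"] * 2,
}
model = {"name": "engine", "size": 3}
eng = IncrementalEngine(rules)
eng.run(model)
model["size"] = 5
eng.update(model, "size")   # only double_size is re-executed
```

A real engine would also track dependencies on the transformation definition itself, so that edits to a rule likewise trigger only the affected re-executions, as the abstract describes.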
Abstract:
In this paper, we present a framework for pattern-based model evolution in the MDA context. In the framework, users define patterns using a pattern modeling language designed to describe software design patterns, and they can use these patterns as rules to evolve their models. Design model evolution takes place in two steps. The first step is a binding process: selecting a pattern and defining where and how to apply it in the model. The second step is an automatic model transformation that actually evolves the model according to the binding information and the pattern rule. The pattern modeling language is defined in terms of a MOF-based role metamodel, implemented using the existing EMF modeling framework, and incorporated as a plugin into the Eclipse modeling environment. The model evolution process is also implemented as an Eclipse plugin. Together, these two plugins provide an integrated framework in which defining and validating patterns, and evolving models based on those patterns, take place in a single modeling environment.
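The two-step process above (bind, then transform) can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: the pattern encoding, the dict-of-classes model, and the role names are not the paper's metamodel, which is MOF/EMF-based rather than plain dictionaries.

```python
# A toy pattern: roles plus a rule stating which operations each
# role-playing class gains when the pattern is applied.
OBSERVER_PATTERN = {
    "roles": ["Subject", "Observer"],
    "rule": {
        "Subject": ["attach", "notify"],
        "Observer": ["update"],
    },
}

def bind(pattern, assignments, model):
    # Step 1: the binding process - check that every role of the chosen
    # pattern is mapped to an existing class in the model.
    for role in pattern["roles"]:
        if assignments.get(role) not in model:
            raise ValueError(f"unbound role: {role}")
    return assignments

def apply_pattern(pattern, binding, model):
    # Step 2: automatic transformation - evolve the model according to
    # the binding information and the pattern rule.
    evolved = {cls: ops[:] for cls, ops in model.items()}
    for role, new_ops in pattern["rule"].items():
        cls = binding[role]
        evolved[cls] += [op for op in new_ops if op not in evolved[cls]]
    return evolved

# Model as {class name: [operations]}; apply Observer to two classes.
model = {"WeatherStation": ["measure"], "Display": ["render"]}
binding = bind(OBSERVER_PATTERN,
               {"Subject": "WeatherStation", "Observer": "Display"}, model)
evolved = apply_pattern(OBSERVER_PATTERN, binding, model)
```

The separation matters: the binding can be validated (step 1 fails fast on an unbound role) before any change touches the model, which is what lets the framework check patterns and evolution in one environment.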
Abstract:
A nature-inspired, decentralised multi-agent algorithm is proposed to solve a distributed task allocation problem in which cities produce and store batches of different mail types. Agents must collect and process the mail batches without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types: when an agent processes a mail batch of a different type from the previous one, it must undergo a change-over, and repeated change-overs render the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated in both static and dynamic environments and with respect to sudden changes. New rules for mail selection and specialisation are proposed and shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm that allows the various rules to evolve and compete. Besides obtaining optimised parameters for the various rules in any environment, we also observe extinction and speciation.
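A mail-selection rule with a change-over penalty of the kind described can be sketched as follows. This is a hedged toy, not the paper's rules: the specialisation update factors, the change-over length, and the demand-weighted selection are all assumed for illustration.

```python
import random

class Agent:
    """One decentralised agent with a specialisation level per mail type."""
    def __init__(self, types, changeover=3, seed=0):
        self.spec = {t: 1.0 for t in types}   # specialisation per mail type
        self.current = None                   # last mail type processed
        self.idle = 0                         # remaining change-over delay
        self.changeover = changeover
        self.rng = random.Random(seed)

    def select(self, demand):
        # Pick a type with probability proportional to local demand
        # times the agent's specialisation in that type.
        weights = {t: demand[t] * self.spec[t] for t in demand}
        r = self.rng.random() * sum(weights.values())
        for t, w in weights.items():
            r -= w
            if r <= 0:
                return t
        return t

    def process(self, demand):
        if self.idle:                         # inactive during change-over
            self.idle -= 1
            return None
        choice = self.select(demand)
        if self.current is not None and choice != self.current:
            self.current = choice
            self.idle = self.changeover       # switching costs several steps
            return None                       # no batch processed this step
        self.current = choice
        # Specialisation rule: reinforce the chosen type, decay the others.
        for t in self.spec:
            self.spec[t] *= 1.1 if t == choice else 0.95
        return choice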
Abstract:
In Software Engineering, traceability is defined as the capability to track requirements, their evolution, and their transformation into the different components involved in the engineering process, as well as to manage the relationships between those components. However, the current state of the art in traceability does not take into account many of the elements that make up a product, especially those created before requirements arise, nor the appropriate use of traceability to manage the underlying knowledge so that it can be exploited by other organizational or engineering processes. In this work we describe the architecture of a reference model that establishes a set of definitions, processes and models which allow a proper management of traceability, and further uses of it, in a context wider than that of software development.
Abstract:
This research examines evolving issues in applied computer science, applying economic and business analyses as well. There are two main areas. The first is internetwork communications as embodied by the Internet. The goal of the research is to devise an efficient pricing, prioritization, and incentivization plan that could realistically be implemented on the existing infrastructure. Criteria include practical and economic efficiency, and proper incentives for both users and providers. Background information on the evolution and functional operation of the Internet is given, and the relevant literature is surveyed and analyzed. Economic analysis is performed on the incentive implications of the current pricing structure and organization. The problems are identified, and minimally disruptive solutions are proposed for all levels of implementation, down to the lowest-level protocol. Practical issues are considered and performance analyses are carried out. The second area of research is mass-market software engineering and how it differs from classical software engineering. Software life-cycle revenues are analyzed, and software pricing and timing implications are derived. A profit-maximizing methodology is developed to select or defer the development of software features for inclusion in a given release. An iterative model of the stages of the software development process is developed, taking into account new communications capabilities as well as profitability.
Abstract:
Software Engineering is one of the most widely researched areas of Computer Science. The ability to reuse software, much like the reuse of hardware components, is one of the key issues in software development. The object-oriented programming methodology is revolutionary in that it promotes software reusability. This thesis describes the development of a tool that helps programmers design and implement software from within the Smalltalk environment (an object-oriented programming environment). The ASDN tool is part of the PEREAM (Programming Environment for the Reuse and Evolution of Abstract Models) system, which advocates incremental development of software. The ASDN tool, along with the PEREAM system, seeks to enhance the Smalltalk programming environment by providing facilities for the structured development of abstractions (concepts). It produces a document that describes the abstractions developed with the tool. The features of the ASDN tool are illustrated by an example.
Abstract:
Software product line engineering promotes large-scale software reuse by developing a system family that shares a set of core features and enables the selection and customization of a set of variabilities that distinguish each product of the family from the others. To address time-to-market pressure, the software industry has been using the clone-and-own technique to create and manage new software products or product lines. Despite its advantages, the clone-and-own approach brings several difficulties for the evolution and reconciliation of software product lines, especially because of the code conflicts generated by the simultaneous evolution of the original software product line, called Source, and its cloned products, called Target. This thesis proposes an approach to evolve and reconcile cloned products based on mining software repositories and code conflict analysis techniques. The approach supports the identification of the different kinds of code conflicts – lexical, structural and semantic – that can occur when integrating development tasks – bug fixes, enhancements and new use cases – from the original evolved software product line into the cloned product line. We have also conducted an empirical characterization study of the code conflicts produced during the evolution and merging of two large-scale web information system product lines. The results demonstrate the approach's potential to automatically or semi-automatically resolve several of the existing code conflicts, helping to reduce the complexity and cost of reconciling cloned software product lines.
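The kinds of conflicts named in this abstract can be illustrated with a three-way comparison between a common base, the evolved Source, and the evolved Target clone. This is a deliberately simplified toy, not the thesis's technique: the (file, element) keying and the detection heuristics are assumptions, and semantic conflicts (textually clean merges whose behavior still diverges) need deeper analysis than any purely textual check, so they are out of scope here.

```python
def classify(base, source, target, unit):
    """Classify the merge outcome for one unit, a (file, element) key
    mapping to that element's code text in each version."""
    b, s, t = base[unit], source.get(unit), target.get(unit)
    if s == b or t == b or s == t:
        return None            # at most one side changed: merges cleanly
    if s is None or t is None:
        return "structural"    # deleted on one side, edited on the other
    return "lexical"           # both sides edited the same element's text

# Example: both the Source line and the Target clone changed the same
# method body in different ways -> a lexical conflict.
unit = ("Order.java", "total")
base = {unit: "sum(items)"}
source = {unit: "sum(items) + tax"}          # Source evolved
target = {unit: "sum(items) - discount"}     # Target clone evolved too
kind = classify(base, source, target, unit)  # -> "lexical"
```

Repository mining supplies the inputs to such a comparison: by grouping the commits of one development task on the Source side, the integration into the Target can be checked task by task rather than as one monolithic merge.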