990 results for Software-related inventions
Abstract:
This thesis describes an approach to overcoming the complexity of software product management (SPM) and consists of several studies that investigate the activities and roles in product management, as well as issues related to the adoption of SPM. The thesis focuses on organizations that have started adopting SPM but faced difficulties due to its complexity and fuzziness, and suggests frameworks for overcoming these challenges using the principles of decomposition and iterative improvement. The research process consisted of three phases, each of which provided complementary results and empirical observations on the problem of overcoming the complexity of SPM. Overall, product management processes and practices in 13 companies were studied and analysed. In addition, data was collected through a worldwide survey. The collected data were analysed using grounded theory (GT) to identify possible ways to overcome the complexity of SPM. Complementary research methods, such as elements of the Theory of Constraints, were used for deeper data analysis. The results of the thesis indicate that decomposing SPM activities according to the specific characteristics of companies and roles is a useful approach for simplifying existing SPM frameworks. Companies can benefit from the results by adopting SPM activities more efficiently and effectively, spending fewer resources on adoption by concentrating on the most important SPM activities.
Abstract:
Adapting and scaling up agile concepts, which are characterized by iterative, self-directed, customer-value-focused methods, is not necessarily a simple endeavor. This thesis concentrates on challenges in a large-scale agile software development transformation in order to enhance understanding of, and bring insight into, the underlying factors behind such emerging challenges. The topic is approached through the concepts of agility and of different methods compared to traditional plan-driven processes, complex adaptive systems theory, and the impact of organizational culture on agile transformation efforts. The empirical part was conducted as a qualitative case study. The internationally operating software development case organization had a year of experience with an agile transformation effort, during which it had also undergone organizational realignment. The primary data collection was conducted through semi-structured interviews, supported by participatory observation. The identified challenges were categorized under four broad themes: organizational, management, team dynamics, and process related. They indicate that agility is a multifaceted concept. Agile practices may bring visibility to issues, many of which are embedded in the organizational culture or in the management style. Viewing software development as a complex adaptive system could facilitate understanding of the underpinning philosophy and eventually solving the issues: interactions are more important than processes, and solving a complex problem, such as novel software development, requires constant feedback and adaptation to changing requirements. Furthermore, an agile implementation seems to be unique in nature, and the agents engaged in the interaction are pivotal to the success of achieving agility. If agility is not a strategic choice for the whole organization, additional issues may arise from the different ways of working in different parts of the organization. Lastly, detailed suggestions to mitigate the challenges of the case organization are provided.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This master's thesis was done for Vipetec Oy, a small company that offers specialized technological services, mainly to the forest industry. The study was initiated partly because the company wants to expand its customer base to a new industry. There were two interconnected goals. The first was to find out how much, and what kind of, value current customers have realized from the ATA Process Event Library, one of the company's products. The second was to determine the best way to present this value, and its implications for future value potential, to both current and potential customers. ATA helps customers speed up grade and product changes, start-up after machine downtime, and recovery from production breaks, all three of which occur in production lines from time to time. Faster operation results in savings of time and material. In addition to ATA, Vipetec offers other services related to the development of automation and the optimization of controls. The theoretical part concentrates on the concept of value, how it can be delivered to customers, and the kinds of risk customers face in industrial purchasing. The function of reference marketing towards customers is also discussed. In the empirical part, the value realized by existing customers is evaluated based on both numerical data and interviews, including a brief case study of one customer. After that, value-based reference marketing for a target industry is examined through interviews with these potential customers. Finally, the research questions are answered and the answers compared with the theoretical knowledge on the subject. The results show that customers' machines that use the full ATA service concept are usually able to save more time and material than machines that use only some features of the product. The interviews indicated that sales arguments focusing on improved competitive status are not as effective as the current arguments focusing on numerical improvements. For potential customers in the new industry, the current sales arguments will likely work best for those whose irregular production situations are caused mainly by faults. When the actions of Vipetec were compared to ten key elements of creating customer references, it was seen that the company has either already included many of them in its strategy or has a good chance of including them with the help of the results of this study.
Abstract:
ICT contributes about 0.83 GtCO2 of emissions, of which 37% comes from telecom infrastructure. At the same time, the increasing cost of energy has been hindering the industry in providing more affordable services to users. One source of these problems is said to be the rigidity of current network infrastructures, which limits innovation in the network. SDN (Software Defined Networking) has emerged as one of the prominent solutions, with its ideas of abstraction, visibility, and programmability in the network. Nevertheless, significant effort is still needed to actually use it to create a more energy- and environmentally friendly network. In this paper, we suggest and develop a platform for building ecology-related SDN applications. Our main approach to realizing this goal is to maximize the abstractions provided by OpenFlow and to expose RESTful interfaces to modules that enable energy saving in the network. While OpenFlow is intended to be the standard SDN protocol, some mechanisms, such as settings related to Quality of Service (QoS), are still not defined in its specification. To solve this, we created REST interfaces for configuring QoS in the switches, which can maximize network utilization. We also created a module for minimizing the network resources required to deliver packets across the network. This is achieved by utilizing redundant links when they are needed, but disabling them when the load in the network decreases. The use of multiple paths in a network is also evaluated for its benefits in terms of transfer-rate improvement and energy savings. We hope the developed framework can help developers create applications that support environmentally friendly network infrastructures.
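As an illustration of the switching logic behind such an energy-saving module, the following Python sketch shows one plausible hysteresis rule: enable redundant links when the primary path nears saturation, power them down when traffic falls. It is a minimal sketch under assumed names, not the paper's implementation; set_link_state and the thresholds stand in for the REST interfaces described above.

    # Minimal sketch of load-adaptive redundant-link control. The callback
    # set_link_state is a hypothetical hook (e.g. a REST PUT against the
    # controller); thresholds are illustrative, not taken from the paper.

    def adapt_redundant_links(primary_load, primary_capacity,
                              redundant_links, set_link_state,
                              high=0.8, low=0.4):
        """primary_load/primary_capacity in bits per second."""
        utilisation = primary_load / primary_capacity
        if utilisation > high:                 # primaries near saturation
            for link in redundant_links:
                set_link_state(link, up=True)
        elif utilisation < low:                # light load: save energy
            for link in redundant_links:
                set_link_state(link, up=False)
        # between the thresholds: hysteresis band, leave the links alone

    # Example: 900 Mb/s on a 1 Gb/s primary path brings the spare link up.
    adapt_redundant_links(9e8, 1e9, ["s1-s2:2"],
                          lambda link, up: print(link, "up" if up else "down"))

The hysteresis band between the two thresholds avoids flapping a link up and down when the load hovers near a single cut-off.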
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery. One of the keys to succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers now ask for these high-quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those that have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate that a piece of software is functioning correctly; many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented, because non-functional aspects such as performance or security apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches that automatically generate tests from behavioral models to address some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or missing tool support. The second contribution of this thesis is therefore proper tool support for the proposed approach, integrated with leading industry tools: we offer independent tools, tools integrated with other industry-leading tools, and complete tool-chains where necessary. Finally, many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. To demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
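The core of generating tests from a behavioural model can be conveyed in a few lines. The sketch below is a simplified assumption, not the thesis's UML-based tool-chain: it derives one test sequence (a list of stimuli) per transition of a toy state machine given as an adjacency map.

    # Minimal sketch of model-based test generation: breadth-first search
    # over a state machine, emitting one event sequence per transition.
    from collections import deque

    def generate_tests(model, start):
        """model: dict state -> list of (event, next_state) transitions."""
        tests, seen = [], set()
        queue = deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            for event, nxt in model.get(state, []):
                if (state, event) in seen:
                    continue                  # transition already covered
                seen.add((state, event))
                tests.append(path + [event])  # a test = sequence of stimuli
                queue.append((nxt, path + [event]))
        return tests

    # Hypothetical calculator model: four transitions, four generated tests.
    calculator = {"idle":   [("digit", "entry")],
                  "entry":  [("digit", "entry"), ("equals", "result")],
                  "result": [("digit", "entry")]}
    print(generate_tests(calculator, "idle"))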
Abstract:
Software product metrics aim at measuring the quality of software. Modularity is an essential factor in software quality. In this work, metrics related to modularity, and especially to the cohesion of modules, are considered. The existing metrics are evaluated, and several new alternatives are proposed. The idea of module cohesion is that a module or a class should consist of related parts. The closely related principle of coupling says that the relationships between modules should be minimized. First, internal cohesion metrics are considered. The relations that are internal to classes are shown to be useless for quality measurement. Second, we consider external relationships for cohesion. A detailed analysis using design patterns and refactorings confirms that external cohesion is a better quality indicator than internal cohesion. Third, motivated by the successes (and problems) of external cohesion metrics, another kind of metric is proposed that represents the quality of the modularity of software. This metric can be applied to refactorings related to classes, resulting in a refactoring suggestion system. To describe the metrics formally, a notation for programs is developed. Because of the recursive nature of programming languages, the properties of programs are most compactly represented using grammars and formal languages. The tools that were used for metrics calculation are also described.
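For a concrete picture of the kind of internal cohesion metric this work evaluates, the classic LCOM measure can be computed from which attributes each method uses. This is a minimal sketch of the textbook LCOM definition, offered only as an illustration; it is not the notation or the new metrics proposed in the thesis.

    # Classic LCOM (lack of cohesion of methods): count method pairs that
    # share no attribute versus pairs that share at least one.
    from itertools import combinations

    def lcom(method_attrs):
        """method_attrs: dict mapping method name -> set of attributes used."""
        no_share = share = 0
        for a, b in combinations(method_attrs.values(), 2):
            if a & b:
                share += 1
            else:
                no_share += 1
        return max(no_share - share, 0)   # 0 means maximally cohesive

    # Hypothetical class: label() touches nothing the other methods touch.
    point = {"move":  {"x", "y"},
             "norm":  {"x", "y"},
             "label": {"name"}}
    print(lcom(point))  # 2 non-sharing pairs - 1 sharing pair = 1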
Abstract:
This work investigates mathematical details and computational aspects of Metropolis-Hastings reptation quantum Monte Carlo and its variants, in addition to the Bounce method and its variants. The issues that concern us include the sensitivity of these algorithms' target densities to the position of the trial electron density along the reptile, the time-reversal symmetry of the propagators, and the length of the reptile. We calculate the ground-state energy and one-electron properties of LiH at its equilibrium geometry for all these algorithms. Importance sampling is performed with a single-determinant, large Slater-type orbital (STO) basis set. The computer codes were written to exploit the efficiencies engineered into modern high-performance computing software. Using the Bounce method to calculate non-energy-related properties, those represented by operators that do not commute with the Hamiltonian, is novel work. We found that the unmodified Bounce method gives a good ground-state energy and very good one-electron properties. We attribute this to the favourable time-reversal symmetry of the Green's functions in its target density. Breaking this symmetry gives poorer results. Using a short reptile in the Bounce method does not alter the quality of the results, which suggests that in future applications one can use a shorter reptile to cut the computational time dramatically.
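The acceptance rule at the heart of all these Metropolis-Hastings-based algorithms can be sketched generically. The following is the textbook rule for a one-dimensional density with symmetric Gaussian proposals, not the reptile moves of the thesis:

    # Generic Metropolis-Hastings step: with a symmetric proposal, accept
    # the move with probability min(1, p(x')/p(x)).
    import math
    import random

    def mh_step(x, log_density, step=0.5):
        x_new = x + random.gauss(0.0, step)
        log_ratio = log_density(x_new) - log_density(x)
        if random.random() < math.exp(min(0.0, log_ratio)):
            return x_new                      # move accepted
        return x                              # move rejected, stay put

    # Example: sample a standard normal and estimate <x^2> (exactly 1).
    x, total, n = 0.0, 0.0, 50_000
    for _ in range(n):
        x = mh_step(x, lambda t: -0.5 * t * t)
        total += x * x
    print(total / n)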
Abstract:
Software is constantly evolving, requiring continuous maintenance and development. It undergoes changes throughout its life, whether during the addition of new features or the fixing of bugs in the code. As software evolves, its architecture tends to degrade over time and becomes less adaptable to new user requirements. It becomes more complex and harder to maintain. In some cases, developers prefer to redesign these architectures from scratch rather than prolong their lives, which leads to a significant increase in development and maintenance costs. Developers must therefore understand the factors that lead to architectural degradation in order to take proactive measures that ease future changes and slow the degradation. Architectural degradation occurs when developers who do not understand the original design of the software make changes to it. On the one hand, making changes without understanding their impact can lead to the introduction of bugs and to the premature retirement of the software. On the other hand, developers who lack knowledge and/or experience in solving a design problem may introduce design defects, which make software harder to maintain and evolve. Developers therefore need mechanisms to understand the impact of a change on the rest of the software, and tools to detect design defects so that they can be corrected. In this thesis, we propose three main contributions. The first contribution concerns the assessment of the degradation of software architectures. This assessment uses a diagram-matching technique, on diagrams such as class diagrams, to identify structural changes between several versions of a software architecture. This requires identifying class renamings, so the first step of our approach is to identify class renamings during the evolution of the software architecture. The second step matches several versions of an architecture to identify its stable parts and its degrading parts; we propose bit-vector and clustering algorithms to analyse the correspondence between several versions of an architecture. The third step measures the degradation of the architecture during the evolution of the software; we propose a set of metrics, computed on the stable parts of the software, to assess this degradation. The second contribution relates to change impact analysis in software. In this context, we present a new metaphor, inspired by seismology, for identifying the impact of changes. Our approach treats a change to a class as an earthquake that propagates through the software along a long chain of intermediary classes.
Our approach combines the analysis of the structural dependencies of classes with the analysis of their history (co-change relationships) in order to measure the magnitude of change propagation in the software, i.e., how a change propagates from the modified class to other classes of the software. The third contribution concerns the detection of design defects. We propose a metaphor inspired by the natural immune system. Like any living creature, system designs are exposed to diseases, which are design defects. Detection approaches are defence mechanisms for system designs. A natural immune system can detect similar pathogens with good precision. This precision has inspired a family of classification algorithms, called artificial immune systems (AIS), which we use to detect design defects. The contributions were evaluated on open-source object-oriented software, and the results obtained allow us to draw the following conclusions:
• The Tunnel Triplets Metric (TTM) and Common Triplets Metric (CTM) give developers good indications of architectural degradation. A decrease in TTM indicates that the original design of the architecture has degraded. A stable TTM indicates the stability of the original design, meaning that the system is adapting to new user requirements.
• Seismology is a useful metaphor for change impact analysis. Indeed, changes propagate through systems like earthquakes: the impact of a change is greatest around the changed class and decreases progressively with the distance from that class. Our approach helps developers identify the impact of a change.
• The immune system is a useful metaphor for design defect detection. The experimental results showed that the precision and recall of our approach are comparable to or better than those of existing approaches.
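The seismology metaphor suggests a simple computational reading: impact starts at 1.0 at the modified class and decays with distance in the dependency graph. The sketch below is an assumption-laden illustration (uniform decay factor, structural dependencies only), not the thesis's calibrated model, which also weighs co-change history.

    # Minimal sketch of earthquake-style change propagation over a class
    # dependency graph; decay=0.5 is an illustrative assumption.
    from collections import deque

    def change_impact(deps, changed, decay=0.5):
        """deps: dict class -> list of classes that depend on it."""
        impact, queue = {changed: 1.0}, deque([changed])
        while queue:
            cls = queue.popleft()
            for dep in deps.get(cls, []):
                if dep not in impact:          # first (shortest) path wins
                    impact[dep] = impact[cls] * decay
                    queue.append(dep)
        return impact

    # Hypothetical system: a change to Parser shakes its dependents.
    deps = {"Parser": ["Compiler"], "Compiler": ["IDE", "CLI"]}
    print(change_impact(deps, "Parser"))
    # {'Parser': 1.0, 'Compiler': 0.5, 'IDE': 0.25, 'CLI': 0.25}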
Abstract:
For some years, Kia Motors Corporation (KMC) has aimed to create and implement a business solution focused on standardized business management across all of Kia's Latin American distributors: Colombia, Peru, Ecuador and Chile. Under the current process, dealers send their distributors in Latin America all information related to financial statements by e-mail, together with a physical database that is then archived. The process is manual, demanding much dedication and time to fulfil the required functions. The current focus of this process is clear: to analyse the performance of each dealer in the network and to identify opportunities for improvement. KMC and all its distributors are interested in a simple, suitable and easy-to-use business management system that will allow all dealers to present their statements and performance in a standardized way, directly to their distributor. The desired system must therefore be able to generate results based on what the distributors report, and to provide a set of features, with adequate functionality, that lets all users in the network (dealers, distributors and KMC) analyse the company's performance and identify the areas requiring improvement. This document presents the work carried out by METROKIA S.A to create and deploy a software tool focused on the above. It has been a multi-stage process in which both the variables and the performance indicators have been revised so that they can be easily read and understood by the whole organization and the network of affiliated dealers.
Abstract:
The SPE taxonomy of evolving software systems, first proposed by Lehman in 1980, is re-examined in this work. The primary concepts of software evolution are related to generic theories of evolution, particularly Dawkins' concept of a replicator, to the hermeneutic tradition in philosophy and to Kuhn's concept of paradigm. These concepts provide the foundations that are needed for understanding the phenomenon of software evolution and for refining the definitions of the SPE categories. In particular, this work argues that a software system should be defined as of type P if its controlling stakeholders have made a strategic decision that the system must comply with a single paradigm in its representation of domain knowledge. The proposed refinement of SPE is expected to provide a more productive basis for developing testable hypotheses and models about possible differences in the evolution of E- and P-type systems than is provided by the original scheme. Copyright (C) 2005 John Wiley & Sons, Ltd.
Abstract:
Running hydrodynamic models interactively allows both visual exploration and changing the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modelling also worked for the models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control of, and access to, the internal state). While model parallelisation is increasingly addressed by the environmental modelling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains, such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth), that also focus on efficient interaction with 3D environments? In these domains, high efficiency is usually achieved with computer graphics algorithms such as surface simplification depending on the current view and the distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modelling without significant changes to the model code, allowing model operation both on multi-core CPU personal computers and on high-performance computing clusters.
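As a small illustration of the view-dependent techniques referred to above, distance-based level-of-detail selection can be sketched in a few lines. The thresholds and mesh names are illustrative assumptions, not taken from any of the cited tools.

    # Minimal sketch of view-dependent level-of-detail (LOD) selection:
    # pick a coarser mesh as the distance from the camera grows.
    import math

    LODS = [(100.0, "mesh_full"),          # < 100 m from camera: full mesh
            (1000.0, "mesh_simplified"),   # mid-range: simplified surface
            (float("inf"), "mesh_coarse")] # far away: coarse aggregate

    def select_lod(camera, cell_centre):
        """Return the mesh variant to draw for one region of the model."""
        distance = math.dist(camera, cell_centre)
        for threshold, mesh in LODS:
            if distance < threshold:
                return mesh

    print(select_lod((0.0, 0.0, 50.0), (30.0, 40.0, 0.0)))  # mesh_full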
Abstract:
This paper provides an examination of the emergence of open business models — entrepreneurial strategies that take advantage of the ease of digital reproduction to distribute free content, while earning money from the sale of related products and services. Locating the origins of open business in the open source software phenomenon, the authors suggest that the business strategies innovated there have broader economic relevance. Through a case study of the tecnobrega music scene in Belém, the paper illustrates how open business models can be applied to the production of cultural materials more generally.
Abstract:
For more than 30 years Brazil has developed specific policies for the informatics sector, from the National Informatics Policy of the 1970s, through the market-reserve period of the 1980s, to the present day, in which Information and Communication Technologies (ICT) are treated as one of the priority areas of industrial policy. Among the current goals, the focus on expanding the volume of software and services exports stands out. Despite these ambitions, however, the country has not achieved notable international prominence in the sector. India, on the other hand, also considered an emerging country and listed among the BRICs, exported about US$47 billion in software and Information Technology (IT) services in 2009, standing out as a protagonist in the sector's international market. The implementation of a technically sophisticated industry such as software, which demands an environment conducive to innovation, in a developing country like India draws attention. Certain legal-institutional arrangements were surely employed in that country. Which ones? To what extent did such arrangements help the Indian development of the sector? And in Brazil? This work starts from the hypothesis that the legal-institutional environment of these countries defined distinct knowledge flows, influencing the type of development of each country's software sector. The specific objective of this research is to investigate how, among other socio-economic factors, these legal-institutional arrangements shaped these different knowledge flows. Legal-institutional environment is understood here as all the regulations that establish institutions, guidelines and common conditions for a given subject. Assuming that the software sector carries out knowledge-intensive activities, for each country in question only those legal-institutional arrangements that had, or have, the power to delimit the sector's knowledge flows will be analysed, whether they stem from trade policies (export and import, or intellectual property) or from investment policies for innovation. The fundamental question goes beyond the debate over whether or not the State should intervene, focusing instead on the different types of involvement observed and their effects. To this end, in addition to a literature review, field research was carried out in India (Delhi, Mumbai, Bangalore) and Brazil (São Paulo, Brasília and Rio de Janeiro), where interviews were conducted with software companies and associations, public managers and academics who study the sector.