11 results for "Artefacts traceability"

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance: 60.00%

Abstract:

Software Product Line (SPL) is a software development paradigm whose main focus is to identify the common features and the variability among applications in a specific domain. An SPL is designed to satisfy all product requirements of its product family. These requirements, and the SPL itself, may change over time due to several factors, such as the evolution of product requirements, of the market, of the SPL process, and of the technologies used to develop the products. To handle these changes, the SPL should be modified and evolved so that it does not become obsolete and adapts itself to the new requirements. Change Impact Analysis is an activity that seeks to understand and identify the consequences that such changes cause on the SPL. Impact analysis on an SPL may be supported by traceability relationships, which identify relationships between artefacts created during all phases of software development. Despite existing solutions for traceability-based change impact analysis in single-system software, there is a lack of such solutions for SPL, since existing solutions do not include estimates specific to SPL artefacts. Thus, this work proposes a change impact analysis process and a tool for assessing change impact through artefact traceability in SPL. For this purpose, we specified a change impact analysis process that considers the artefacts produced during SPL development. We also implemented a tool that allows estimating and identifying the SPL artefacts and products affected by changes in other products, changes in classes, changes in features, changes between SPL releases, and artefacts related to changes in core assets and variability. Finally, the results were evaluated through metrics.
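To give a feel for how traceability relationships support impact analysis (a minimal sketch under our own assumptions, not the process or tool proposed in this work; the artefact names are hypothetical), trace links can be stored as directed edges between artefacts, and the impact set of a change is everything reachable from the changed artefact:

```python
# Hypothetical trace links between SPL artefacts: each edge means
# "the target artefact realizes or depends on the source artefact".
trace_links = {
    "feature:MediaCapture": ["class:CameraController", "usecase:TakePhoto"],
    "class:CameraController": ["test:CameraControllerTest"],
    "usecase:TakePhoto": ["test:TakePhotoAcceptance"],
    "test:CameraControllerTest": [],
    "test:TakePhotoAcceptance": [],
}

def impact_set(changed_artefact, trace_links):
    """Collect all artefacts reachable from the changed one via trace links."""
    impacted, stack = set(), [changed_artefact]
    while stack:
        for successor in trace_links.get(stack.pop(), []):
            if successor not in impacted:
                impacted.add(successor)
                stack.append(successor)
    return impacted

print(sorted(impact_set("feature:MediaCapture", trace_links)))
# ['class:CameraController', 'test:CameraControllerTest',
#  'test:TakePhotoAcceptance', 'usecase:TakePhoto']
```

A real SPL analysis, as the abstract describes, would also distinguish link types and separate core assets from variable ones when estimating impact.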

Relevance: 10.00%

Abstract:

Over the last decades, digital inclusion public policies have invested significantly in the purchase of hardware and software in order to bring technology, specifically computers and broadband Internet, to Brazilian public teaching institutions. However, the education of teachers to handle these artefacts has been set aside, even though the information society demands it. Given that, this dissertation takes as its object of study the digital literacy practices performed by 38 (thirty-eight) teachers in initial and continuing education by means of the extension course Literacies and technologies: Portuguese language teaching and cyberculture demands. We aim at investigating the digital literacy practices of these developing teachers at three specific moments: before, during and after this extension action, with the intent to (i) delineate the digital literacy practices performed by the collaborators before the formative action; (ii) narrate the literacy events made possible by the extension course; and (iii) investigate the contributions of the course to the collaborators' teaching practice. We sought theoretical contributions in literacy studies (BAYNHAM, 1995; KLEIMAN, 1995; HAMILTON; BARTON; IVANIC, 2000), specifically regarding digital literacy (COPE; KALANTZIS, 2000; BUZATO, 2001, 2007, 2009; SNYDER, 2002, 2008; LANKSHEAR; KNOBEL, 2002, 2008) and teacher education (PERRENOUD, 2000; SILVA, 2001). Methodologically, this virtual ethnography study (KOZINETS, 1997; HINE, 2000) is situated in the field of Applied Linguistics and adopts a quali-quantitative research approach (NUNAN, 1992; DÖRNYEI, 2006). The data analysis showed that (i) before the course, the digital literacy practices focused on the personal and academic dimensions of the teachers' realities at the expense of the professional dimension; (ii) during the extension action, the teachers collaboratively took part in hybrid study sessions with a pedagogical focus on the use of ICTs, carrying out digital literacy practices previously unknown to them; and (iii) after the course, the attitude of the collaborating teachers concerning the use of ICTs in their regular professional practice had changed, as those teachers started to make effective use of them, giving social visibility to what was produced in the school. We also observed that the teachers in initial education acted as more experienced peers in the collaborative learning process, offering scaffolding (VYGOTSKY, 1978; BRUNER, 1985) to the teachers in continuing education. This occurred because the undergraduates' digital literacy practices were more sophisticated and because they belong to generation Y (PRENSKY, 2001).

Relevance: 10.00%

Abstract:

Building on the studies of the anthropologist Bruno Latour, which show the importance of rhetoric and institutional strategies in the fabrication of scientific truths; on Isabelle Stengers' hypotheses concerning the ambiguous nature of the sciences; and on Edgar Morin's ideas about the need to fight fragmenting thought and to reconnect scientific and humanistic culture, this thesis addresses the relationship between humans and their artefacts, the challenge of describing phenomena and their properties, the dialogue between humans and the many dimensions of matter, and the responsibility that should accompany every scientific advance. Victor Frankenstein, his creature, Brown-Séquard and synthetic testosterone are actors that help compose the cognitive panorama of this research, which extends the limits of science and of the social to the collective of non-humans and calls for a reform of thought and education that includes them.

Relevance: 10.00%

Abstract:

Nowadays, the importance of using software processes is well consolidated and considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge in the software industry. It involves adapting software process models to the reality of the projects, and it must also promote the reuse of past experiences in the definition and development of software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the software systems produced. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review was conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines was proposed and developed; and (iii) empirical studies and a controlled experiment assessed and compared the proposed annotative approach against a compositional one. The first, a comparative qualitative study, analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. The second, a comparative quantitative study, considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, a controlled experiment evaluated the effort of use and the understandability of the investigated approaches when modeling and evolving software process line specifications. The studies bring evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, to assist the variability management of software process lines.
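As an illustration of the annotative style the study investigates (a hypothetical sketch under our own assumptions, not the dissertation's actual notation), process elements can be annotated with presence conditions over features, and a concrete process is derived by evaluating those conditions against a feature selection:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """A process activity annotated with an optional presence condition."""
    name: str
    # Features that must be selected for this activity to be included;
    # an empty set means the activity is mandatory (core process).
    required_features: frozenset = frozenset()

def derive_process(activities, selected_features):
    """Keep only activities whose presence condition holds for the selection."""
    return [a for a in activities if a.required_features <= selected_features]

# Hypothetical process line: activity and feature names are illustrative only.
process_line = [
    Activity("Requirements Elicitation"),
    Activity("Formal Specification", frozenset({"safety_critical"})),
    Activity("Code Review", frozenset({"peer_review"})),
    Activity("Acceptance Testing"),
]

print([a.name for a in derive_process(process_line, frozenset({"peer_review"}))])
# ['Requirements Elicitation', 'Code Review', 'Acceptance Testing']
```

The contrast with a compositional approach is that, there, each variant would live in a separate module to be composed in, rather than being annotated in place as above.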

Relevance: 10.00%

Abstract:

Research in Requirements Engineering has been growing over the last few years. Researchers are concerned with a set of open issues such as: communication between the several user profiles involved in software engineering; scope definition; and volatility and traceability issues. To cope with these issues, existing work concentrates on (i) defining processes to collect clients' specifications in order to solve scope issues; (ii) defining models to represent requirements in order to address communication and traceability issues; and (iii) working on mechanisms and processes to be applied to requirements modeling in order to facilitate requirements evolution and maintenance, addressing volatility and traceability issues. We propose an iterative Model-Driven process to solve these issues, based on a double-layered CIM to communicate requirements-related knowledge to a wider range of stakeholders. We also present a tool to help the requirements engineer throughout the RE process. Finally, we present a case study to illustrate the benefits and usage of the process and tool.
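To make the idea of a double-layered CIM concrete (a minimal sketch under our own assumptions; the layer and element names below are hypothetical, not the thesis's metamodel), one can picture an upper business-level layer whose elements trace down to a more detailed requirements-level layer:

```python
from dataclasses import dataclass

@dataclass
class Element:
    ident: str
    layer: str        # "business" (upper CIM layer) or "requirements" (lower)
    description: str

@dataclass
class TraceLink:
    source: str       # id of a business-level element
    target: str       # id of a requirements-level element

def trace(element_id, links, elements):
    """Follow trace links from a business element to its requirements."""
    targets = {l.target for l in links if l.source == element_id}
    return [e for e in elements if e.ident in targets]

elements = [
    Element("B1", "business", "Customers can pay online"),
    Element("R1", "requirements", "The system shall accept credit card payments"),
    Element("R2", "requirements", "The system shall issue an electronic receipt"),
]
links = [TraceLink("B1", "R1"), TraceLink("B1", "R2")]

for req in trace("B1", links, elements):
    print(req.ident, "-", req.description)
```

Keeping the two layers explicit is what lets non-technical stakeholders read the upper layer while the lower layer remains precise enough for engineers, which is the communication benefit the abstract claims.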

Relevance: 10.00%

Abstract:

Tracing between the models of the requirements and architecture activities is a strategy that aims to prevent loss of information, reducing the gap between these two initial activities of the software life cycle. In the context of Software Product Lines (SPL), it is important to have this support, which allows the correspondence between these two activities while managing variability. In order to address this issue, this work presents a bidirectional mapping process, defining transformation rules between elements of a goal-oriented requirements model (described in PL-AOVgraph) and elements of an architectural description (defined in PL-AspectualACME). These mapping rules are evaluated through a case study: the GingaForAll SPL. To automate this transformation, we developed the MaRiPLA tool (Mapping Requirements to Product Line Architecture) using Model-Driven Development (MDD) techniques, including the Atlas Transformation Language (ATL) with Ecore metamodel specifications, together with Xtext, a DSL definition framework, and Acceleo, a code generation tool, in the Eclipse environment. Finally, the generated models are evaluated based on quality attributes such as variability, derivability, reusability, correctness, traceability, completeness, evolvability and maintainability, extracted from the CAFÉ Quality Model.
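As a rough illustration of the kind of mapping rule involved (a Python sketch under our own assumptions; the actual MaRiPLA rules are written in ATL over the PL-AOVgraph and PL-AspectualACME metamodels, and the element names below are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    """A hypothetical goal-oriented requirements element (source model)."""
    name: str
    variability: Optional[str] = None   # e.g. "optional", "alternative", or None

@dataclass
class Component:
    """A hypothetical architectural element (target description)."""
    name: str
    variability: Optional[str] = None

def goal_to_component(goal: Goal) -> Component:
    """One mapping rule: a goal becomes a component, carrying its
    variability annotation so the correspondence stays traceable."""
    return Component(name=goal.name, variability=goal.variability)

print(goal_to_component(Goal("MediaPlayback", "optional")))
# Component(name='MediaPlayback', variability='optional')
```

Because each rule preserves names and variability annotations, running rules in the opposite direction (architecture back to requirements) is what makes the mapping bidirectional.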

Relevance: 10.00%

Abstract:

Software Product Lines (SPL) is a software engineering approach for developing families of software systems that share common features and differ in other features according to the requested systems. Adopting the SPL approach can bring several benefits, such as cost reduction, product quality, productivity, and shorter time to market. On the other hand, the SPL approach brings new challenges to software evolution that must be considered. Recent research has explored and proposed automated approaches based on code analysis and traceability techniques for change impact analysis in the context of SPL development. These approaches still have limitations, such as the customization of analysis functionality to address different change impact analysis strategies, and the change impact analysis of fine-grained variability. This dissertation proposes a change impact analysis tool for SPL development, called Squid Impact Analyzer. The tool performs change impact analysis based on information from variability modeling, the mapping of variability to code assets, and the existing dependency relationships between code assets. The tool is assessed through an experiment that compares its change impact analysis results against real changes applied over several evolution releases of an SPL for media management on mobile devices.
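A minimal sketch of this style of analysis (our own simplification, not Squid Impact Analyzer's implementation; file and feature names are hypothetical): starting from a changed code asset, follow reverse dependency edges to collect the affected assets, then use the feature-to-asset mapping to report which features are impacted:

```python
from collections import deque

# Hypothetical inputs: a dependency graph between code assets (edges
# point from an asset to the assets it depends on) and a mapping from
# features to the assets that implement them.
dependencies = {
    "PhotoViewer.java": ["MediaStore.java"],
    "MusicPlayer.java": ["MediaStore.java"],
    "MediaStore.java": [],
}
feature_to_assets = {
    "Photo": {"PhotoViewer.java", "MediaStore.java"},
    "Music": {"MusicPlayer.java", "MediaStore.java"},
}

def impacted_assets(changed, dependencies):
    """Transitively collect the assets that depend on the changed asset."""
    reverse = {a: set() for a in dependencies}
    for src, targets in dependencies.items():
        for t in targets:
            reverse[t].add(src)
    seen, queue = {changed}, deque([changed])
    while queue:
        for dependant in reverse[queue.popleft()]:
            if dependant not in seen:
                seen.add(dependant)
                queue.append(dependant)
    return seen

changed = impacted_assets("MediaStore.java", dependencies)
impacted_features = {f for f, assets in feature_to_assets.items()
                     if assets & changed}
print(sorted(changed), sorted(impacted_features))
# ['MediaStore.java', 'MusicPlayer.java', 'PhotoViewer.java'] ['Music', 'Photo']
```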

Relevance: 10.00%

Abstract:

When crosscutting concerns identification is performed from the beginning of development, in the activities involved in requirements engineering, there are many gains in terms of quality, cost and efficiency throughout the software development lifecycle. This early identification supports the evolution of requirements, detects possible flaws in the requirements specification, improves traceability among requirements, provides better software modularity and prevents possible rework. However, despite these advantages, the identification of crosscutting concerns during requirements engineering faces several difficulties, such as the lack of systematization and of tools that support it. Furthermore, it is difficult to justify why some concerns are identified as crosscutting and others are not, since this identification is most often made without any methodology that systematizes and grounds it. In this context, this work proposes an approach based on Grounded Theory, called GT4CCI, to systematize and ground the process of identifying crosscutting concerns in the requirements document, at the initial stages of the software development process. Grounded Theory is a renowned methodology for the qualitative analysis of data. Through the use of GT4CCI it is possible to better understand, track and document concerns, adding gains in terms of quality, reliability and modularity throughout the entire software lifecycle.
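One simple intuition behind flagging a concern as crosscutting (our own illustrative heuristic, not the GT4CCI coding procedure, which is qualitative) is scattering: a concern whose codes appear across many otherwise unrelated requirements is a crosscutting candidate:

```python
# Hypothetical requirements, each tagged with the concern codes assigned
# during a qualitative coding pass (all names are illustrative only).
requirements = {
    "R1": {"booking", "logging"},
    "R2": {"payment", "logging", "security"},
    "R3": {"catalog", "logging"},
    "R4": {"booking", "payment"},
}

def crosscutting_candidates(requirements, threshold=0.5):
    """Flag concerns coded in more than `threshold` of the requirements."""
    counts = {}
    for codes in requirements.values():
        for code in codes:
            counts[code] = counts.get(code, 0) + 1
    cutoff = threshold * len(requirements)
    return sorted(c for c, n in counts.items() if n > cutoff)

print(crosscutting_candidates(requirements))  # ['logging']
```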

Relevance: 10.00%

Abstract:

The maintenance and evolution of software systems has become a very critical task over the last years due to the diversity and high demand of features, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing the variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results for different releases; and (iv) repository mining, identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected them during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases (seven of each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were recovered from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether or not a commit will cause degradation is 10% better than a random decision (whose expected ROC area is 50%).
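A minimal sketch of the variation-analysis phase under our own assumptions (not the framework's actual implementation; timings are invented): given per-scenario execution times measured on two releases, flag scenarios whose mean times differ by more than a chosen relative threshold:

```python
import statistics

def significant_variation(times_old, times_new, threshold=0.10):
    """Flag a scenario whose mean execution time changed by more than
    `threshold` (relative) between two releases. A real analysis would
    apply a statistical test over repeated runs; this is a simplification."""
    old, new = statistics.mean(times_old), statistics.mean(times_new)
    change = (new - old) / old
    return abs(change) > threshold, change

# Hypothetical timings (milliseconds) for one scenario over repeated runs.
release_a = [120.0, 118.5, 121.2, 119.8]
release_b = [141.3, 139.7, 142.0, 140.5]

flagged, change = significant_variation(release_a, release_b)
print(f"flagged={flagged}, change={change:+.1%}")  # flagged=True, change=+17.5%
```

The repository-mining phase would then search the commits and issues landed between the two releases for the likely causes of each flagged scenario.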
