903 results for Search Based Software Engineering
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
One problem with using a component-based software development approach is that once software modules are reused over generations of products, they form legacy structures that can be challenging to understand, making these systems difficult to validate. Therefore, tools and methodologies that enable engineers to see the interactions of these software modules will enhance their ability to make these software systems more dependable. To address this need, we propose SimSight, a framework to capture dynamic call graphs in Simics, a widely adopted commercial full-system simulator. Simics is a software system that simulates complete computer systems; thus, it performs nearly identical tasks to a real system, but at a much lower speed and with far greater execution observability. We have implemented SimSight to generate dynamic call graphs of statically and dynamically linked functions in an x86/Linux environment. A case study illustrates how SimSight can be used to identify sources of software errors. We then evaluate its performance using 12 integer programs from the SPEC CPU2006 benchmark suite.
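The abstract gives no implementation details; as a rough sketch of the idea behind a dynamic call graph (not SimSight itself, which instruments x86 call/return instructions inside Simics), the following Python fragment records the caller-to-callee edges observed at runtime using the standard sys.setprofile hook:

```python
import sys
from collections import defaultdict

def build_dynamic_call_graph(entry_point):
    """Run entry_point() and record the caller -> callee edges seen at runtime."""
    edges = defaultdict(int)

    def tracer(frame, event, arg):
        # 'call' fires on every Python function entry; the parent frame is the caller.
        if event == "call" and frame.f_back is not None:
            caller = frame.f_back.f_code.co_name
            callee = frame.f_code.co_name
            edges[(caller, callee)] += 1

    sys.setprofile(tracer)
    try:
        entry_point()
    finally:
        sys.setprofile(None)
    return dict(edges)

# Tiny stand-ins for reused legacy modules (illustrative only).
def helper():
    return 42

def main():
    return helper() + helper()

if __name__ == "__main__":
    for (caller, callee), count in build_dynamic_call_graph(main).items():
        print(f"{caller} -> {callee} (called {count}x)")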
Abstract:
Distributed Software Development (DSD) is a development strategy that addresses the demands of globalization for increased productivity and reduced costs. However, temporal distance, geographical dispersion, and socio-cultural differences introduce new challenges and, above all, add new requirements related to the communication, coordination, and control of projects. Among these demands is the need for a software process that adequately supports distributed software development. This paper presents an integrated approach to software development and testing that considers the peculiarities of distributed teams. Its purpose is to support DSD by providing better project visibility, improving communication between the development and test teams, and reducing the ambiguity and difficulty of understanding artifacts and activities. The integrated approach was conceived on four pillars: (i) identifying the DSD peculiarities that affect development and test processes; (ii) defining the elements needed to compose an integrated development-and-test approach that supports distributed teams; (iii) describing and specifying the workflows, artifacts, and roles of the approach; and (iv) representing the approach in a way that enables it to be effectively communicated and understood.
Abstract:
XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
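For readers unfamiliar with module (iv), tree edit distance, the following minimal sketch computes a simplified top-down edit distance between ordered labeled trees with unit costs; it only illustrates the notion and is neither the paper's algorithm nor the full Zhang-Shasha procedure commonly used for XML:

```python
from functools import lru_cache

# A minimal ordered labeled tree: (label, (child, child, ...)).
def tree(label, *children):
    return (label, tuple(children))

def tree_size(t):
    return 1 + sum(tree_size(c) for c in t[1])

@lru_cache(maxsize=None)
def tree_edit_distance(t1, t2):
    """Simplified top-down edit distance; unit insert/delete/relabel costs.
    This illustrates the idea only, not the full Zhang-Shasha algorithm."""
    if t1 is None and t2 is None:
        return 0
    if t1 is None:
        return tree_size(t2)   # insert whole subtree t2
    if t2 is None:
        return tree_size(t1)   # delete whole subtree t1
    relabel = 0 if t1[0] == t2[0] else 1
    return relabel + _seq_distance(t1[1], t2[1])

@lru_cache(maxsize=None)
def _seq_distance(xs, ys):
    """Edit distance between two child sequences, recursing into subtrees."""
    if not xs:
        return sum(tree_size(y) for y in ys)
    if not ys:
        return sum(tree_size(x) for x in xs)
    return min(
        tree_edit_distance(xs[0], ys[0]) + _seq_distance(xs[1:], ys[1:]),
        tree_size(xs[0]) + _seq_distance(xs[1:], ys),   # delete first subtree
        tree_size(ys[0]) + _seq_distance(xs, ys[1:]),   # insert first subtree
    )

# Two toy XML trees: <a><b/><c/></a> vs <a><b/><d/></a>.
doc1 = tree("a", tree("b"), tree("c"))
doc2 = tree("a", tree("b"), tree("d"))
print(tree_edit_distance(doc1, doc2))  # 1 (relabel c -> d)
```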
Abstract:
Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool, to retrieve latent spaces that focus on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data is the means of encoding the user's knowledge into the loop; however, this work also devises a strategy for computing PLS-reduced spaces when no training data is available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and it is capable of working with small and unbalanced training sets.
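The abstract does not spell out the PLS step; a minimal sketch of the general idea using scikit-learn's PLSRegression, with one-hot-encoded class labels as the response (PLS-DA style) so the 2D latent space favors class discriminability, might look like this (synthetic data, not the paper's setup):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Toy stand-in for the paper's setting: a small labeled training set used to
# learn a discriminative 2D projection, then applied to a larger unlabeled set.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (30, 50)), rng.normal(2, 1, (30, 50))])
y_train = np.array([0] * 30 + [1] * 30)
X_large = rng.normal(1, 1.5, (1000, 50))   # data to project for visualization

# PLS-DA style: regress one-hot labels so the latent space separates classes.
Y = np.eye(2)[y_train]
pls = PLSRegression(n_components=2)
pls.fit(X_train, Y)

# 2D coordinates for any data set, ready for a scatter-plot visualization.
coords = pls.transform(X_large)
print(coords.shape)  # (1000, 2)
```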
Abstract:
In this paper we discuss the problem of how to single out moments of interest in videos or live broadcast shows. The primary contribution is a system that allows users to personalize their programs with previously created media stickers: pieces of content that may be temporarily attached to the original video. We present the system's architecture and implementation, which offer users operators for transparently annotating videos while watching them. We offered a soccer fan the opportunity to add stickers to the video while watching a live match: the user reported both enjoying the stickers and being comfortable using them during the match, a relevant result even though the experience was not fully representative.
Abstract:
Although the automatic identification of nontechnical losses has been studied extensively, the problem of selecting the most representative features, in order to boost identification accuracy and to characterize possible illegal consumers, has not attracted much attention in this context. In this paper, we focus on this problem by reviewing three evolutionary techniques for feature selection, one of which we introduce in this context. The results demonstrate that selecting the most representative features can considerably improve the classification accuracy of possible frauds in datasets composed of industrial and commercial profiles.
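As an illustration of evolutionary feature selection in general (not any of the three specific techniques the paper reviews), a small genetic algorithm over binary feature masks, with cross-validated accuracy as fitness and synthetic data standing in for consumer profiles, could look like:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Synthetic stand-in for industrial/commercial consumer profiles.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=42)

def fitness(mask):
    """Cross-validated accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# A deliberately small genetic algorithm: binary masks as chromosomes.
pop = rng.random((20, X.shape[1])) < 0.5
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # selection: keep top half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05       # bit-flip mutation
        children.append(np.logical_xor(child, flip))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```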
Abstract:
Background: In recent years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks have been designed around the abstractions provided by this paradigm. We call this type of framework a Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, requiring two main technical skills: (i) knowing the syntax details of the programming language used to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. A further problem is that the reuse process can only begin once development reaches the implementation phase, preventing it from starting earlier. Method: To address these problems, we present a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework actually needs from the application under development. To support the approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former describes the framework structure, while the latter supports the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be generated automatically. Results: We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second evaluated the effort of maintaining applications developed with each CF version. The results show an improvement of 97% in productivity; however, little difference was observed in the effort required to maintain the applications. Conclusion: Using the approach presented here, we conclude that (i) it is possible to automate the instantiation of CFs, and (ii) developer productivity improves when a model-based instantiation approach is used.
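The RRM and RM are not specified in the abstract; purely as a hypothetical sketch of the final step, in which reuse code is generated from a filled-in Reuse Model, one might flatten the RM into a dictionary and emit framework-coupling code from a template:

```python
# Hypothetical, flattened stand-in for a filled-in Reuse Model (RM); the real
# RM is a model, not a dict, and the generated target below is only AspectJ-like.
reuse_model = {
    "framework": "PersistenceCF",
    "persistent_classes": ["Customer", "Order"],
    "id_fields": {"Customer": "cpf", "Order": "number"},
}

TEMPLATE = """// generated coupling for {cls} (illustrative target syntax)
declare parents: {cls} implements PersistentObject;
pointcut idOf{cls}(): get(* {cls}.{idf});
"""

def generate_reuse_code(model):
    """Emit the coupling code an engineer would otherwise write by hand."""
    parts = [f"// couples the application to {model['framework']}"]
    for cls in model["persistent_classes"]:
        parts.append(TEMPLATE.format(cls=cls, idf=model["id_fields"][cls]))
    return "\n".join(parts)

print(generate_reuse_code(reuse_model))
```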
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm, which is based on the notions of agents and systems of interacting agents as the fundamental abstractions for designing, developing, and managing, at runtime, typically distributed software systems. Today, however, engineers often work with technologies that do not support the abstractions used in the design of such systems; for this reason, research on methodologies has become a central point of scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still belong to a first generation of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are often poorly documented and are frequently defined and presented with a focus on only specific aspects of the methodology. The role played by meta-models thus becomes fundamental for comparing and evaluating methodologies: a meta-model specifies the concepts, rules, and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the ideas underpinning the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e., the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to model naturally all the aspects of multi-agent systems. In particular, different perspectives exist on the role that the environment plays within agent systems; it is at least clear, however, that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is now generally acknowledged in the multi-agent systems community, so the environment should be explicitly accounted for in engineering such systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions, entities of the environment encapsulating some functions, and topology abstractions, entities of the environment that represent its (logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
Abstract:
Current trends in software development push developers to face a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, in such a way that the resulting dynamics exhibit some degree of order, i.e., in terms of system evolution and desired equilibrium. Autonomous agents and multiagent systems are widely regarded in the literature as one of the most natural approaches for tackling this kind of challenge. Indeed, agent research seems to be converging towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, which are assumed to be autonomous entities pursuing their design objectives, multiagent systems introduce further notions as first-class entities: institutional/organizational entities, intended above all for normative regulation, interaction, and teamwork management, as well as environmental entities, provided as resources to further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems offers a set of diverse approaches, each addressing at most one of the aspects mentioned above, this work proposes a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, the work pursues the reconciliation of environments with sociality, social interaction with environment-based interaction, and environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex, situated organizations in a coherent programming approach. Rooted in the Agents and Artifacts (A&A) meta-model, recently introduced in the context of both agent-oriented software engineering and programming, the thesis promotes the notion of Embodied Organizations, characterized by computational infrastructures attaining a seamless integration between agents, organizations, and environmental entities.
Abstract:
The article introduces the E-learning Circle, a tool developed to assure the quality of the software design process of e-learning systems, considering pedagogical principles as well as technology. The E-learning Circle consists of a number of concentric circles divided into three sectors. The content of the inner circles is based on pedagogical principles, while the outer circle specifies how those principles may be implemented with technology. The circle's centre is dedicated to the subject taught, ensuring focus on the specific subject's properties. The three sectors represent the student, the teacher, and the learning objectives. The strengths of the E-learning Circle are its compact presentation combined with the overview it provides, as well as its usefulness as a design tool that deals with complexity, provides a common language, and embeds best practice. The E-learning Circle is not a prescriptive method, but it is useful in several design models and processes. The article presents two projects in which the E-learning Circle was used as a design tool.
Abstract:
Semantic technologies have become widely adopted in recent years, and choosing the right technologies for the problems that users face is often a difficult task. This paper presents an application of the Analytic Network Process for the recommendation of semantic technologies, which is based on a quality model for semantic technologies. Instead of relying on expert-based comparisons of alternatives, the comparisons in our framework depend on real evaluation results. Furthermore, the recommendations in our framework derive from user quality requirements, which leads to better recommendations tailored to users’ needs. This paper also presents an algorithm for pairwise comparisons, which is based on user quality requirements and evaluation results.
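The pairwise-comparison algorithm itself is not given in the abstract; the classical ANP/AHP machinery it builds on derives priority weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch, assuming hypothetical evaluation scores for three technologies on a single quality criterion:

```python
import numpy as np

# Hypothetical evaluation results for three semantic technologies on one
# quality criterion (higher is better); the paper derives pairwise
# comparisons from such measurements instead of expert judgment.
scores = np.array([0.90, 0.60, 0.30])

# Pairwise comparison matrix: entry (i, j) says how much better i is than j.
A = scores[:, None] / scores[None, :]

# Saaty-style priorities: principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
priorities = w / w.sum()
print(priorities)  # proportional to the scores for a consistent matrix
```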
Abstract:
Background: Software testing is a key aspect of software reliability and quality assurance, in a context where software development must constantly overcome mammoth challenges in a continuously changing environment. One characteristic of software testing is that it has a large intellectual-capital component and can thus benefit from the experience gained in past projects; software testing can therefore potentially benefit from solutions provided by the knowledge management discipline. There are, in fact, a number of proposals concerning effective knowledge management for several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. Such a system is an effective knowledge management resource that enables testers and managers to take advantage of the experience locked away in the brains of testers; to do this, the experience has to be gathered, disseminated, and reused. Method: After analyzing existing proposals for managing software testing experience, we detected significant weaknesses in current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of software testing lessons learned; (ii) sets up procedures for managing the lessons learned; and (iii) supports the design of software tools to manage them. Results: A different approach, based on managing the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
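The abstract does not detail the lesson structure; a hypothetical minimal record and lookup, just to make the gather/disseminate/reuse cycle concrete (field names are illustrative, not the paper's schema):

```python
from dataclasses import dataclass, field

@dataclass
class TestingLesson:
    """Hypothetical lesson-learned record; the paper defines its own structure."""
    title: str
    context: str          # project / test level where the lesson arose
    problem: str
    recommendation: str
    keywords: list = field(default_factory=list)

def search(lessons, term):
    """Naive keyword retrieval: dissemination and reuse need some form of lookup."""
    term = term.lower()
    return [l for l in lessons if term in l.title.lower() or term in l.keywords]

repo = [TestingLesson("Flaky UI tests", "regression suite, nightly build",
                      "Timing-dependent selectors caused false failures",
                      "Use explicit waits keyed on application state",
                      keywords=["ui", "flaky"])]
print([l.title for l in search(repo, "flaky")])
```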
Abstract:
The focus of this paper is to outline the main structure of an alternative software process improvement method for small and medium-sized enterprises. The method is based on the action-package concept, which helps institutionalize effective practices at affordable implementation cost. The paper also presents the results and lessons learned from applying the method to three enterprises in the requirements engineering domain.