Abstract:
Requirements engineering is often seen in agile methods as a bureaucratic activity that makes the process less agile. However, the lack of documentation in agile development environments is identified as one of the main challenges of the methodology. There is thus a contradiction between what agile methodology claims and what actually happens in real settings. For example, user stories are widely used in agile methods to describe requirements, but they are not sufficient on their own: a user story is too narrow an artifact to represent and detail requirements, and activities such as verifying the software context and the dependencies between stories are limited when only this artifact is used. Within requirements engineering, goal-oriented approaches bring benefits to requirements documentation, including requirements completeness, analysis of alternatives, and support for requirements rationale. Among these approaches, the i* modeling technique stands out by providing a graphical view of the actors involved in the system and their dependencies. This work proposes an additional resource intended to mitigate the lack of documentation in agile methods. Its objective is to provide a graphical view of software requirements and their relationships through i* models, thereby enriching requirements in agile methods. To this end, we propose a set of heuristics to map requirements expressed as user stories onto i* models. These models can be used as a form of documentation in agile environments: by mapping user stories to i* models, requirements are viewed more broadly and with their proper relationships, according to the business environment they are meant to serve.
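The mapping heuristics themselves are the dissertation's contribution and are not reproduced in the abstract; what follows is only a minimal sketch of one plausible heuristic, assuming the canonical story template "As a <role>, I want <action> so that <benefit>": the role becomes an i* actor, the wanted action a goal, and the benefit a softgoal. All names are illustrative.

```python
import re
from dataclasses import dataclass, field

# Canonical user-story template: "As a <role>, I want <action> so that <benefit>"
STORY_RE = re.compile(
    r"As an? (?P<role>.+?), I want (?P<action>.+?)(?: so that (?P<benefit>.+))?$",
    re.IGNORECASE,
)

@dataclass
class IStarModel:
    actors: set = field(default_factory=set)
    goals: list = field(default_factory=list)      # (actor, goal)
    softgoals: list = field(default_factory=list)  # (actor, softgoal)

def map_story(story: str, model: IStarModel) -> None:
    """One plausible heuristic: role -> actor, 'I want' clause -> goal,
    'so that' clause -> softgoal attached to the same actor."""
    m = STORY_RE.match(story.strip())
    if not m:
        raise ValueError(f"story does not follow the template: {story!r}")
    role = m.group("role").strip()
    model.actors.add(role)
    model.goals.append((role, m.group("action").strip()))
    if m.group("benefit"):
        model.softgoals.append((role, m.group("benefit").strip()))

model = IStarModel()
map_story("As a customer, I want to track my order so that I feel in control", model)
print(model)
```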
Abstract:
This work presents an algorithmic study of the Multicast Packing Problem under a multiobjective approach. The first step was an extensive review of the literature on the problem, which served as the reference point for the definition of the multiobjective mathematical model. Next, the instances used in the experiments were defined; they were created based on the main characteristics found in the literature. With the mathematical model and the instances defined, several algorithms were developed, based on classical approaches to multiobjective optimization: NSGA-II (three versions) and SPEA2 (three versions). In addition, GRASP procedures were adapted to work with multiple objectives, resulting in two versions. These algorithms were composed of three recombination operators (C1, C2, and C3), two construction operators, a mutation operator, and a local search procedure. Finally, an extensive experimentation process was carried out in three stages: the first consisted of tuning the parameters; the second identified the best version of each algorithm; and the third compared the best versions of each algorithm in order to identify the best one overall. The algorithms were evaluated using the Hypervolume and Multiplicative Epsilon quality indicators.
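NSGA-II, SPEA2, and the multiobjective GRASP all rest on the same building block, Pareto dominance, which the Hypervolume and Epsilon indicators also presuppose. A minimal sketch, assuming both objectives are minimized; the toy objective pair (tree cost, congestion) is illustrative, not taken from the dissertation's model.

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """Pareto dominance for minimization: a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points: list) -> list:
    """Naive O(n^2) extraction of the non-dominated set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy bi-objective values, e.g. (total tree cost, maximum congestion)
pop = [(4.0, 9.0), (5.0, 5.0), (7.0, 3.0), (6.0, 6.0), (8.0, 8.0)]
print(nondominated_front(pop))  # -> [(4.0, 9.0), (5.0, 5.0), (7.0, 3.0)]
```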
Abstract:
When the identification of crosscutting concerns is performed from the beginning of development, within the activities of requirements engineering, there are many gains in terms of quality, cost, and efficiency throughout the software development lifecycle. This early identification supports the evolution of requirements, detects possible flaws in the requirements specification, improves traceability among requirements, provides better software modularity, and prevents rework. Despite these advantages, however, the identification of crosscutting concerns during requirements engineering faces several difficulties, such as the lack of systematization and of tools to support it. Furthermore, it is difficult to justify why some concerns are identified as crosscutting and others are not, since this identification is most often made without any methodology that systematizes and grounds it. In this context, this work proposes an approach based on Grounded Theory, called GT4CCI, to systematize and ground the process of identifying crosscutting concerns in the requirements document during the initial stages of the software development process. Grounded Theory is a renowned methodology for the qualitative analysis of data. Through the use of GT4CCI it is possible to better understand, trace, and document concerns, yielding gains in quality, reliability, and modularity across the entire software lifecycle.
Abstract:
Web services are software components, accessible via the Internet, that provide functionality to be used by applications. Today it is common to reuse third-party services to compose new services. This composition can occur in two styles, called orchestration and choreography. A choreography is a collaboration among services that know their partners in the composition in order to achieve the service's desired functionality. An orchestration, on the other hand, has a central process (the orchestrator) that coordinates all application operations. Our work is placed in the latter context: we propose an abstract model for running service orchestrations. For this purpose, a graph reduction machine is defined for the execution of service orchestrations specified in a variant of the PEWS composition language. Moreover, a prototype of this machine (in Java) was built as a proof of concept.
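The abstract does not spell out the machine's reduction rules, so the following is only a plausible sketch of the idea, assuming two PEWS-like combinators (sequence and parallel) and a naive strategy that reduces the left branch to completion first; the prototype described above is in Java, but Python is used here for brevity.

```python
from dataclasses import dataclass

@dataclass
class Invoke:          # leaf: one service call
    name: str

@dataclass
class Seq:             # run left, then right
    left: object
    right: object

@dataclass
class Par:             # run both branches (a real machine would interleave them)
    left: object
    right: object

DONE = None  # marker for a fully reduced term

def step(term):
    """One reduction step; returns the new term (DONE when finished)."""
    if isinstance(term, Invoke):
        print(f"invoking {term.name}")   # stand-in for the actual web call
        return DONE
    if isinstance(term, Seq):
        left = step(term.left)
        return term.right if left is DONE else Seq(left, term.right)
    if isinstance(term, Par):
        left = step(term.left)           # naive: finish left branch first
        return term.right if left is DONE else Par(left, term.right)
    raise TypeError(term)

def run(term):
    while term is not DONE:
        term = step(term)

run(Seq(Invoke("login"), Par(Invoke("billing"), Invoke("shipping"))))
```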
Abstract:
Nonogram is a logic puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition and data compression, among others. The puzzle consists in determining an assignment of colors to pixels distributed in an N × M matrix that satisfies line and column constraints. A Nonogram is encoded by a vector whose elements specify the number of pixels in each row and column of a figure without specifying their coordinates. This work presents exact and heuristic approaches to solving Nonograms. Depth-first search was one of the chosen exact approaches because it is a typical example of a brute-force algorithm that is easy to implement. Another exact approach was based on the Las Vegas algorithm, in order to investigate whether the randomness introduced by a Las Vegas-based algorithm would be an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms. A new way of computing the objective function is also proposed. The approaches are applied to 234 instances, with sizes ranging from 5 × 5 to 100 × 100, including both logical and random Nonograms.
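The line and column constraints are the heart of every approach listed above. A minimal sketch of the feasibility check for the black-and-white case (the colored case mentioned in the abstract would generalize the runs to (color, length) pairs):

```python
def blocks(line):
    """Lengths of the maximal runs of filled cells (1s) in a row or column."""
    runs, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

def line_satisfied(line, clue):
    """True when the line's filled runs match its clue exactly."""
    return blocks(line) == list(clue)

# Row 1 0 1 1 0 has runs [1, 2]; the clue [1, 2] is satisfied, [3] is not.
print(line_satisfied([1, 0, 1, 1, 0], [1, 2]))  # True
print(line_satisfied([1, 0, 1, 1, 0], [3]))     # False
```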
Abstract:
The widespread growth in the use of smart cards (by banks, transport services, cell phones, etc.) has brought up an important fact that must be addressed: the need for tools that can verify such cards, so as to guarantee the correctness of their software. As the vast majority of cards being developed nowadays use JavaCard technology as their software layer, the use of the Java Modeling Language (JML) to specify their programs appears as a natural solution. JML is a formal language tailored to Java. It was inspired by methodologies from Larch and Eiffel and has been widely adopted as the de facto language for the specification of Java programs. Various tools that make use of JML have already been developed, covering a wide range of functionalities such as runtime and static checking. However, the existing tools for static checking are not fully automated, and those that are do not offer an adequate level of soundness and completeness. Our objective is to contribute a set of techniques that can be used to accomplish fully automated and reliable verification of JavaCard applets, and in this work we present the first steps toward that goal. Using a software platform comprising Krakatoa, Why, and haRVey, we developed a set of techniques to reduce the size of the theory needed to verify the specifications. These techniques have yielded very good results, with gains of almost 100% in all tested cases, and have proved valuable not only in this setting but in most real-world problems related to automatic verification.
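The abstract does not detail how the theory is reduced; one standard way to shrink the background theory sent to a prover such as haRVey is relevance filtering, sketched below under the assumption that axioms are selected by the symbols they share, transitively, with the proof goal. The axiom names and symbol sets are illustrative.

```python
def relevant_axioms(goal_symbols, axioms, rounds=2):
    """Keep only axioms that share a symbol with the goal, transitively,
    for a bounded number of rounds. `axioms` maps a name to its symbol set."""
    reachable = set(goal_symbols)
    selected = set()
    for _ in range(rounds):
        for name, syms in axioms.items():
            if name not in selected and syms & reachable:
                selected.add(name)
                reachable |= syms
    return selected

axioms = {
    "ax_len_nonneg": {"length", "geq", "zero"},
    "ax_get_set":    {"get", "set"},
    "ax_transfer":   {"balance", "debit", "credit"},
}
# The goal mentions array access only: the banking axiom is filtered out.
print(relevant_axioms({"get", "length"}, axioms))
```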
Abstract:
The way organizations deal with their information assets is nowadays a key factor not only for success but for survival in the globalized world. The number of information security incidents has grown in recent years, and establishing information security policies that keep the security requirements of assets at the desired levels is a top priority for companies. This dissertation proposes a unified process for the elaboration, maintenance, and development of information security policies, the Processo Unificado para Políticas de Segurança da Informação (PUPSI). The elaboration of this proposal began with the construction of a body of knowledge based on official documents and standards about security policies and information security published over the last two decades. Based on the examined documents, the model defines the security policies that need to be established in the organization, their workflow, and the hierarchical sequence among them; a model of the entities participating in the process is also provided. Since the problem addressed by the model is complex, involving all the security policies a company must have, PUPSI adopts an iterative and incremental approach. This approach was obtained by instantiating the RUP (Rational Unified Process) model. RUP is an object-oriented software development process platform from Rational Software (an IBM company) that incorporates best practices recognized by the market. From RUP, PUPSI inherited a process structure that offers functionality, ease of dissemination and comprehension, and performance and agility in process adjustment, along with the capacity to adapt to technological and structural changes in the market and in the company.
Abstract:
Nowadays, due to the security vulnerabilities of distributed systems, mechanisms are needed to guarantee the security requirements of communication between distributed objects. Middleware platforms (component integration platforms) provide security functions that typically offer services for auditing, message protection, authentication, and access control. To support these functions, middleware platforms use digital certificates that are provided and managed by external entities. However, most middleware platforms do not define requirements for obtaining, maintaining, validating, and delegating digital certificates. In addition, most digital certification systems use X.509 certificates, which are complex and carry many attributes. To address these problems, this work proposes a generic digital certification service for middleware platforms. The service provides flexibility through the joint use of public-key certificates, to implement the authentication function, and attribute certificates, for the authorization function; it also supports delegation. Certificate-based access control is transparent to objects. The proposed service defines the digital certificate format, the storage and retrieval system, certificate validation, and support for delegation. To validate the proposed architecture, this work presents an implementation of the digital certification service for the CORBA middleware platform and a case study that illustrates the service's functionality.
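A minimal sketch of the division of labor the service is built on, with signature validation stubbed out and plain data classes standing in for the real certificate formats the service defines; all names and attributes are illustrative, not the dissertation's.

```python
from dataclasses import dataclass

@dataclass
class IdentityCert:      # public-key certificate: who the caller is
    subject: str
    public_key: str

@dataclass
class AttributeCert:     # attribute certificate: what the caller may do
    holder: str          # must match the identity certificate's subject
    attributes: frozenset

def signature_valid(cert) -> bool:
    return True          # stub: real validation checks the issuer's signature

def authorize(identity: IdentityCert, attr: AttributeCert, required: str) -> bool:
    """Authentication via the identity certificate, authorization via the
    attribute certificate; the two are linked by the holder/subject fields."""
    return (signature_valid(identity)
            and signature_valid(attr)
            and attr.holder == identity.subject
            and required in attr.attributes)

alice = IdentityCert("alice", "pk-alice")
role = AttributeCert("alice", frozenset({"account:read"}))
print(authorize(alice, role, "account:read"))   # True
print(authorize(alice, role, "account:write"))  # False
```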
Abstract:
The development of software systems with domain-specific languages has become increasingly common. Domain-specific languages (DSLs) increase domain expressiveness and raise the abstraction level by facilitating the generation of models or low-level source code, thus increasing the productivity of systems development. Consequently, methods for the development of software product lines and software system families have also proposed the adoption of domain-specific languages. Recent studies have investigated the limitations of feature model expressiveness and proposed the use of DSLs as a complement to, or substitute for, feature models. However, in complex projects a single DSL is often insufficient to represent the different views and perspectives of development, making it necessary to work with multiple DSLs. To address the new challenges of this context, such as managing consistency between DSLs and the need for methods and tools that support development with multiple DSLs, several approaches for building generative approaches have been proposed over the past years; however, none of them considers the composition of DSLs. Thus, with the aim of addressing this problem, the main objectives of this dissertation are: (i) to investigate the integrated use of feature models and DSLs during the domain and application engineering phases of generative approach development; (ii) to propose a method for the development of generative approaches with DSL composition; and (iii) to investigate and evaluate the use of modern model-driven engineering technologies to implement integration strategies between feature models and DSL composition.
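To ground the feature-model side of objective (i), here is a minimal sketch of checking a product configuration against mandatory, requires, and excludes constraints; the feature names and rules are illustrative, not the dissertation's.

```python
# Checking a product configuration against a tiny feature model.
MANDATORY = {"core"}
REQUIRES = {"reporting": "persistence"}    # feature -> feature it depends on
EXCLUDES = {("embedded_db", "remote_db")}  # mutually exclusive pairs

def valid_configuration(selected: set) -> bool:
    if not MANDATORY <= selected:
        return False
    if any(dep not in selected for f, dep in REQUIRES.items() if f in selected):
        return False
    if any(a in selected and b in selected for a, b in EXCLUDES):
        return False
    return True

print(valid_configuration({"core", "reporting", "persistence"}))  # True
print(valid_configuration({"core", "embedded_db", "remote_db"}))  # False
```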
Abstract:
The importance of non-functional requirements for computer systems is increasing. Satisfying these requirements demands special attention to the software architecture, since an unsuitable architecture adds complexity beyond the intrinsic complexity of the system. Some studies have shown that, although requirements engineering and software architecture activities act on different aspects of development, they must be performed iteratively and intertwined to produce satisfactory software systems. The STREAM process presents a systematic approach to reducing the gap between requirements and architecture development, emphasizing functional requirements but using non-functional requirements in an ad hoc way. However, non-functional requirements typically influence the system as a whole. STREAM uses architectural patterns to refine the software architecture, and these patterns are chosen based on non-functional requirements, again in an ad hoc way. This master's thesis presents a process that improves STREAM by making the choice of architectural patterns systematic, using non-functional requirements to guide the refinement of the software architecture.
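The thesis's actual selection process is not given in the abstract; a minimal sketch of one way such a choice can be made systematic, assuming each candidate pattern is scored by a weighted table of its impacts on the prioritized NFRs. The patterns, impacts, and weights are illustrative.

```python
IMPACT = {  # pattern -> NFR -> impact (-1 hurts, 0 neutral, +1 helps)
    "Layers":      {"modifiability": 1, "performance": -1, "security": 1},
    "Broker":      {"modifiability": 1, "performance": -1, "security": 0},
    "Pipe-Filter": {"modifiability": 1, "performance": 0,  "security": -1},
}

def rank_patterns(weights: dict) -> list:
    """Order candidate patterns by the weighted sum of their NFR impacts."""
    score = lambda p: sum(weights.get(nfr, 0) * v
                          for nfr, v in IMPACT[p].items())
    return sorted(IMPACT, key=score, reverse=True)

# A system where security matters most, then modifiability, then performance.
print(rank_patterns({"security": 3, "modifiability": 2, "performance": 1}))
# -> ['Layers', 'Broker', 'Pipe-Filter']
```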
Abstract:
Problems related to the scattering and tangling phenomena, such as difficulty in maintaining a system, are increasingly frequent. One way to address them is to identify crosscutting concerns. To maximize its benefits, this identification must be performed from the early stages of the development process, but some works have reported that this is not done in most cases, making system development susceptible to errors and prone to later refactoring. This situation directly affects the quality and cost of the system. PL-AOVgraph is a goal-oriented requirements modeling language that supports the representation of relationships among requirements and provides separation of crosscutting concerns through the representation of crosscutting relationships. This work therefore presents a semi-automatic method for identifying crosscutting concerns in requirements specifications written in PL-AOVgraph. An adjacency matrix is used to capture the contribution relationships among the elements, and crosscutting concerns are identified by a fan-out analysis of the contribution relationships recorded in that matrix. Once identified, the corresponding crosscutting relationships are created. The method is implemented as a new module of the ReqSys-MDD tool.
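A minimal sketch of the fan-out analysis described above, assuming a simple threshold criterion; the element names, matrix, and threshold are illustrative, not the dissertation's.

```python
# An adjacency matrix records which element contributes to which; elements
# whose fan-out meets a threshold are flagged as candidate crosscutting concerns.
elements = ["login", "logging", "checkout", "encryption"]
#           contributes to -> login  logging  checkout  encryption
matrix = [                  [  0,      0,       0,        0],   # login
                            [  1,      0,       1,        1],   # logging
                            [  0,      0,       0,        0],   # checkout
                            [  1,      0,       1,        0]]   # encryption

def crosscutting_candidates(matrix, names, threshold=2):
    """Flag elements that contribute to at least `threshold` other elements."""
    return [names[i] for i, row in enumerate(matrix)
            if sum(row) >= threshold]

print(crosscutting_candidates(matrix, elements))  # ['logging', 'encryption']
```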
Abstract:
A self-adaptive software system is able to change its structure and/or behavior at runtime in response to changes in its requirements, environment, or components. One way to achieve self-adaptation is to use sequences of actions (known as adaptation plans), which are typically defined at design time. This is the approach adopted by Cosmos, a framework to support the configuration and management of resources in distributed environments. In order to deal with the variability inherent to self-adaptive systems, such as the appearance of new components that enable configurations not envisioned at development time, this dissertation aims to give Cosmos the capability of generating adaptation plans at runtime. To this end, it was necessary to reengineer the Cosmos framework so as to integrate it with a mechanism for the dynamic generation of adaptation plans, and our work has focused on this reengineering. Among the changes made to Cosmos, we highlight the changes to the metamodel used to represent components and applications, which was redefined based on an architecture description language. These changes were propagated to the implementation of a new Cosmos prototype, which was then used to develop a case study application as a proof of concept. Another effort was to make Cosmos more attractive by integrating it with another platform; in this dissertation, the OSGi platform, which is well known and accepted by industry.
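The abstract leaves the plan-generation mechanism open; the simplest possible sketch treats a plan as the difference between the current configuration and a target one. The action and component names are illustrative, not Cosmos's actual metamodel.

```python
def generate_plan(current: set, target: set) -> list:
    """Derive an adaptation plan at runtime: stop what the target drops,
    start what it adds (set iteration order is unspecified)."""
    plan = [("stop", c) for c in current - target]
    plan += [("start", c) for c in target - current]
    return plan

running = {"http-server", "local-cache"}
desired = {"http-server", "replicated-cache", "monitor"}
for action, component in generate_plan(running, desired):
    print(action, component)
# stop local-cache / start replicated-cache / start monitor
```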
Abstract:
This work studies two important problems arising from the operations of the petroleum and natural gas industries. The first, the pipe dimensioning problem on constrained gas distribution networks, consists in finding the least-cost combination of diameters, from a discrete set of commercially available ones, for the pipes of a given gas network, such that minimum pressure requirements at each demand node and upstream pipe conditions are respected. The second, the piston pump unit routing problem, arises from the need to define the routes of a piston pump unit visiting a number of non-emergent wells in onshore fields, i.e., wells that do not have enough pressure to make the oil emerge to the surface. The periodic version of this problem takes the wells' refilling equation into account to provide more accurate long-term planning. Besides the mathematical formulation of both problems, an exact algorithm and a tabu search were developed for the first problem, and a theoretical bound and a ProtoGene transgenetic algorithm were developed for the second. The main concepts of the metaheuristics are presented along with the details of their application to the cited problems. The results obtained for both applications are promising when compared to theoretical bounds and alternative solutions, both in terms of solution quality and running time.
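A minimal tabu-search skeleton for the first problem, assuming one move changes a single pipe's diameter within a discrete catalog; cost() and feasible() are stand-ins for the real objective and pressure-drop constraints, which the abstract only names, so this is a sketch, not the dissertation's algorithm.

```python
DIAMETERS = [100, 150, 200, 250]               # mm, illustrative catalog
PRICE = {100: 1.0, 150: 1.6, 200: 2.5, 250: 3.9}

def cost(sol):
    """Purchase cost of the chosen diameters (illustrative objective)."""
    return sum(PRICE[d] for d in sol)

def feasible(sol):
    """Stand-in for the minimum-pressure checks at each demand node."""
    return sum(sol) >= 600

def tabu_search(n_pipes=4, iters=50, tenure=5):
    sol = [max(DIAMETERS)] * n_pipes           # start from the safest design
    best = list(sol)
    tabu = {}                                  # (pipe, diameter) -> expiry iteration
    for it in range(iters):
        candidates = []
        for i in range(n_pipes):
            for d in DIAMETERS:
                if d == sol[i] or tabu.get((i, d), -1) >= it:
                    continue
                neighbor = sol[:i] + [d] + sol[i + 1:]
                if feasible(neighbor):
                    candidates.append((cost(neighbor), i, d, neighbor))
        if not candidates:
            break
        _, i, d, neighbor = min(candidates)    # best admissible move, even uphill
        tabu[(i, sol[i])] = it + tenure        # forbid reverting pipe i for a while
        sol = neighbor
        if cost(sol) < cost(best):
            best = list(sol)
    return best, cost(best)

print(tabu_search())
```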
Abstract:
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both methodologies, students' skills in dealing with ill-defined problems must be developed from the first programming courses onward. In POPT, however, students are stimulated to clarify ill-defined problem specifications guided by the definition of test cases (in a table-like manner). This work presents POPT and TestBoot, a tool developed to support the methodology. To evaluate the approach, a case study and a controlled experiment (which adopted a Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering undergraduate programs at the Federal University of Rio Grande do Norte, Brazil. The results show that, when compared to a blind-testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many (professor-defined) test cases as those of non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
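A minimal sketch of POPT's table-like test cases turned into executable checks; the toy problem ("round a grade to the nearest half point") and its rows are illustrative, and the point is that each row pins down one ambiguity of an ill-defined specification before any implementation is written.

```python
TEST_TABLE = [
    # (input grade, expected rounded grade) - each row resolves one ambiguity
    (7.2, 7.0),    # below the midpoint: round down
    (7.3, 7.5),    # at or above .25: round to the half
    (7.8, 8.0),    # near the top: round up
    (10.0, 10.0),  # boundary: the maximum grade is unchanged
]

def round_half(grade: float) -> float:
    """Implementation written only after the table made the spec precise."""
    return round(grade * 2) / 2

for given, expected in TEST_TABLE:
    got = round_half(given)
    assert got == expected, f"round_half({given}) = {got}, expected {expected}"
print("all table rows pass")
```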