160 results for Earth System Science
Abstract:
The importance of non-functional requirements for computer systems is increasing. Satisfying these requirements requires special attention to the software architecture, since an unsuitable architecture adds complexity beyond the intrinsic complexity of the system. Some studies have shown that, although requirements engineering and software architecture activities address different aspects of development, they must be performed iteratively and intertwined to produce satisfactory software systems. The STREAM process presents a systematic approach to reduce the gap between requirements and architecture development, emphasizing functional requirements but using non-functional requirements in an ad hoc way. However, non-functional requirements typically influence the system as a whole. STREAM uses architectural patterns to refine the software architecture, but these patterns are also chosen by using non-functional requirements in an ad hoc way. This master's thesis presents a process that improves STREAM by making the choice of architectural patterns systematic, using non-functional requirements to guide the refinement of the software architecture.
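To illustrate the kind of systematic, NFR-driven pattern choice the thesis argues for, here is a minimal Java sketch; the NFR names, pattern names, and the NfrCatalog class are hypothetical illustrations and are not defined by STREAM itself.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a catalogue that maps non-functional requirements
// (NFRs) to candidate architectural patterns, so the choice stops being ad hoc.
// The NFR/pattern pairs below are illustrative, not taken from STREAM.
public class NfrCatalog {

    private static final Map<String, List<String>> CANDIDATES = Map.of(
        "performance",   List.of("Broker", "Microkernel"),
        "security",      List.of("Layers", "Proxy"),
        "modifiability", List.of("Layers", "Pipes-and-Filters")
    );

    // Returns candidate patterns for an elicited NFR, or an empty list.
    public static List<String> candidatePatterns(String nfr) {
        return CANDIDATES.getOrDefault(nfr.toLowerCase(), List.of());
    }

    public static void main(String[] args) {
        System.out.println(candidatePatterns("security")); // [Layers, Proxy]
    }
}
```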
Abstract:
The occurrence of problems related to the scattering and tangling phenomena, such as difficulty in maintaining a system, is increasingly frequent. One way to address this problem is the identification of crosscutting concerns. To maximize its benefits, the identification must be performed from the early stages of the development process, but some works have reported that this has not been done in most cases, making system development susceptible to errors and prone to later refactoring. This situation directly affects the quality and cost of the system. PL-AOVgraph is a goal-oriented requirements modeling language that supports the representation of relationships among requirements and provides separation of crosscutting concerns through the representation of crosscutting relationships. This work therefore presents a semi-automatic method for crosscutting-concern identification in requirements specifications written in PL-AOVgraph. An adjacency matrix is used to identify the contribution relationships among the elements. The identification of crosscutting concerns is based on fan-out analysis of the contribution relationships recorded in the adjacency matrix. Once identified, the crosscutting relationships are created. The method is also implemented as a new module of the ReqSys-MDD tool.
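A minimal sketch of the fan-out analysis described above, assuming hypothetical element names and threshold; the actual PL-AOVgraph heuristics may differ.

```java
// Contribution relationships are stored in an adjacency matrix; an element
// whose fan-out exceeds a threshold is flagged as a crosscutting-concern
// candidate. All names and values here are illustrative.
public class FanOutAnalysis {

    public static void main(String[] args) {
        String[] elements = {"logging", "checkout", "catalog", "security"};
        // adj[i][j] == 1 means element i contributes to element j.
        int[][] adj = {
            {0, 1, 1, 1},   // logging contributes to three other elements
            {0, 0, 1, 0},
            {0, 0, 0, 0},
            {0, 1, 1, 0}
        };
        int threshold = 2; // fan-out above this suggests a crosscutting concern

        for (int i = 0; i < elements.length; i++) {
            int fanOut = 0;
            for (int j = 0; j < adj[i].length; j++) fanOut += adj[i][j];
            if (fanOut > threshold) {
                System.out.println(elements[i]
                        + " is a crosscutting candidate (fan-out=" + fanOut + ")");
            }
        }
    }
}
```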
Abstract:
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both POPT and POP, students' skills in dealing with ill-defined problems must be developed from the first programming courses. In POPT, however, students are stimulated to clarify ill-defined problem specifications, guided by the definition of test cases (in a table-like manner). This paper presents POPT and TestBoot, a tool developed to support the methodology. In order to evaluate the approach, a case study and a controlled experiment (which adopted the Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering programs at the Federal University of Rio Grande do Norte, Brazil. The results show that, when compared to a blind-testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many (professor-defined) test cases as those of non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
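As an illustration of tests-first clarification (not taken from POPT's actual materials), the sketch below pins down an intentionally underspecified grading rule with JUnit 5 tests; the problem, rule, and names are hypothetical.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Test cases written before the implementation, resolving an ill-defined
// specification ("split a grade into pass/fail") the way POPT's test-case
// tables would: the boundary cases make the hidden decisions explicit.
class GradingTest {

    // The clarified rule the test table encodes: grades >= 5.0 pass.
    static String classify(double grade) {
        return grade >= 5.0 ? "pass" : "fail";
    }

    @Test
    void boundaryCasesClarifyTheSpec() {
        assertEquals("pass", classify(5.0));  // boundary is inclusive
        assertEquals("fail", classify(4.9));
        assertEquals("pass", classify(10.0));
    }
}
```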
Abstract:
One way to deal with the high complexity of current software systems is through self-adaptive systems. A self-adaptive system must be able to monitor itself and its environment, analyze the monitored data to determine the need for adaptation, decide how the adaptation will be performed, and finally make the necessary adjustments. One way to adapt a system is to generate, at runtime, the process that will perform the adaptation. An advantage of this approach is the possibility of taking into account features that can only be evaluated at runtime, such as the emergence of new components that allow architectural arrangements not foreseen at design time. In this work our main objective is to use a framework for the dynamic generation of processes to generate architectural adaptation plans in an OSGi environment. Our main interest is to evaluate how this framework for the dynamic generation of processes behaves in new environments.
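A minimal sketch of the monitor/analyze/plan/execute cycle described above, assuming a hypothetical component name and a trivial planning rule; it is plain Java, uses no real OSGi API, and is not the thesis' actual framework, so it only illustrates the shape of runtime plan generation.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of an adaptation loop: monitor readings, analyze them against a
// threshold, generate a plan at runtime, and "execute" it (here, by printing).
public class AdaptationLoop {

    record Reading(String component, double responseTimeMs) {}

    public static void main(String[] args) {
        Queue<Reading> monitored = new ArrayDeque<>();
        monitored.add(new Reading("payment-service", 950.0)); // hypothetical

        for (Reading r : monitored) {
            // Analyze: decide whether adaptation is needed.
            if (r.responseTimeMs() > 500.0) {
                // Plan (generated at runtime): swap in a component variant
                // that was not necessarily foreseen at design time.
                String plan = "replace " + r.component() + " with cached variant";
                // Execute: a real system would rebind services here.
                System.out.println("executing plan: " + plan);
            }
        }
    }
}
```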
Abstract:
Component-based development revolutionized the software development process, facilitating maintenance and providing more reliability and reuse. Nevertheless, even with all the advantages of developing components, their composition remains an important concern. Verification through informal tests is not enough to achieve safe composition, because such tests are not based on formal semantic models with which we can precisely describe a system's behaviour. In this context, formal methods provide ways to accurately specify systems through mathematical notations, providing, among other benefits, more safety. The formal method CSP enables the specification of concurrent systems and the verification of properties intrinsic to them, as well as refinement among different models. Some approaches apply constraints using CSP to check the behavior of compositions of components, assisting in their verification in advance. Hence, aiming to assist this process, and considering that the software market increasingly requires automation, reducing work and providing agility, this work presents a tool that automates the verification of compositions of components, in which all the complexity of the formal language is kept hidden from users. Thus, through a simple interface, the BST (BRIC-Tool-Suport) tool helps to create and compose components, predicting, in advance, undesirable behaviors in the system, such as deadlocks.
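To make the deadlock example concrete, here is a classic two-process deadlock written in CSP notation; it is a textbook illustration of the kind of property such a tool checks, not a model from the dissertation.

```latex
% P insists on event a before b; Q insists on b before a:
\[
P = a \rightarrow b \rightarrow P, \qquad
Q = b \rightarrow a \rightarrow Q
\]
% Synchronizing P and Q on the shared alphabet {a, b}:
\[
P \parallel_{\{a,\,b\}} Q
\]
% Initially P offers only a and Q offers only b, so no shared event can
% occur and the composition deadlocks; this is exactly the property a
% refinement checker such as FDR verifies mechanically.
```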
Abstract:
The main goal of Regression Testing (RT) is to reuse the test suite of the latest version of a software system in its current version, in order to maximize the value of the tests already developed and to ensure that old features continue working after new changes. Even with reuse, it is common that not all tests need to be executed again. For this reason, Regression Test Selection (RTS) techniques are encouraged; they aim to select, from all tests, only those that reveal faults, which reduces costs and makes this an interesting practice for testing teams. Several recent research works evaluate the quality of the selections performed by RTS techniques, identifying which one presents the best results as measured by metrics such as inclusion and precision. RTS techniques should search the System Under Test (SUT) for tests that reveal faults. However, because this problem has no viable general solution, they alternatively search for tests that reveal changes, where faults may occur. Nevertheless, these changes may modify the execution flow of the program itself, so that some tests no longer exercise the same stretch of code. In this context, this dissertation investigates whether changes performed in a SUT affect the quality of the test selection performed by an RTS technique and, if so, which characteristics of the changes cause errors, leading the technique to wrongly include or exclude tests. For this purpose, a tool was developed in Java to automate the measurement of the inclusion and precision averages achieved by a regression test selection technique for a particular kind of change. In order to validate this tool, an empirical study was conducted to evaluate the RTS technique Pythia, based on textual differencing, on a large web information system, analyzing the types of tasks performed to evolve the SUT.
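For reference, a simple set-based formulation of the two metrics named above (the dissertation and the classic RTS literature may use percentage-based variants): with T the reused suite, S ⊆ T the selected tests, and M ⊆ T the modification-revealing tests,

```latex
\[
\text{inclusion} = \frac{|S \cap M|}{|M|},
\qquad
\text{precision} = \frac{|S \cap M|}{|S|}
\]
% A safe selection has inclusion 1 (no modification-revealing test is
% missed); precision drops as tests that reveal no change are selected.
```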
Abstract:
Due to the constantly increasing use of wireless networks in domestic, business and industrial environments, new challenges have emerged. The prototyping of new protocols in these environments is typically restricted to simulation, which requires a double implementation: one in the simulation environment, where an initial proof of concept is performed, and another in a real environment. Moreover, when real environments are used, it is not trivial to create a testbed for high-density wireless networks, given the need for various real devices as well as attenuators and power reducers to reduce the physical space required by such laboratories. In this context, the LVWNet (Linux Virtual Wireless Network) project was originally designed to create completely virtual testbeds for IEEE 802.11 networks on the Linux operating system. This work extends the LVWNet project with several features: the ability to interact with real wireless hardware; initial mobility support, positioning the nodes in a coordinate space measured in meters, with loss calculations based on free-space attenuation; and improved scalability through a dedicated protocol that enables communication between nodes without an intermediate host and supports dynamic node registration, allowing new nodes to be inserted into an already operating network.
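An illustrative sketch of the free-space attenuation computed from node positions in meters, as the text describes; the node coordinates and frequency below are hypothetical, and LVWNet's actual implementation may differ.

```java
// Free-space path loss between two nodes placed on a coordinate plane.
public class FreeSpaceLoss {

    // FSPL in dB for distance d (meters) and frequency f (Hz):
    // FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    static double fsplDb(double meters, double hz) {
        final double C = 299_792_458.0; // speed of light, m/s
        return 20 * Math.log10(meters) + 20 * Math.log10(hz)
             + 20 * Math.log10(4 * Math.PI / C);
    }

    public static void main(String[] args) {
        // Node B relative to node A, in meters (hypothetical positions).
        double dx = 30.0, dy = 40.0;
        double distance = Math.hypot(dx, dy);      // 50 m
        System.out.printf("loss at 2.4 GHz: %.1f dB%n",
                fsplDb(distance, 2.4e9));          // ~74 dB
    }
}
```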
Abstract:
Control and automation of residential environments (domotics) is an emerging area of computing application. The development of computational systems for domotics is complex, due to the diversity of potential users and because such systems are immersed in a context of emotional and family relationships. Currently, the development of this kind of system focuses mainly on physical and technological aspects. For this reason, the present research investigates gestural interaction from the point of view of Human-Computer Interaction (HCI). First, we approach the subject by building a conceptual framework for discussing the challenges of the area, integrating three dimensions: people, interaction mode and domotics. A further analysis of the domain is accomplished using the theoretical-methodological framework of Organizational Semiotics. We then define recommendations for diversity that underpin inclusive design, guided by physical, perceptual and cognitive abilities, aiming to better represent the diversity concerned. Although developers have the support of gesture-recognition technologies that enable faster development, they face another difficulty when the application's gestural commands are not restricted to the standard gestures provided by development frameworks. Therefore, we idealized an abstraction of gestural interaction through a formalization, described syntactically by construction blocks that originate a grammar of gestural interaction and, semantically, interpreted from the point of view of the residential system. We then define a set of metrics, grounded in the recommendations, that are described with information from the pre-established grammar, and we design and implement in Java, on top of this grammar, a residential system based on gestural interaction for use with Microsoft Kinect. Finally, we conduct an experiment with potential end users of the system to better analyze the research results.
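A hedged sketch of the "construction blocks" idea: a command is a posture followed by a movement, mapped to a home action. The block and action names are hypothetical and do not reproduce the thesis' grammar.

```java
import java.util.Map;

// A tiny gestural grammar: (posture, movement) pairs are the construction
// blocks, and each well-formed pair maps to a domotic action.
public class GestureGrammar {

    record Gesture(String posture, String movement) {}

    private static final Map<Gesture, String> RULES = Map.of(
        new Gesture("open-hand",   "swipe-right"), "lights-on",
        new Gesture("open-hand",   "swipe-left"),  "lights-off",
        new Gesture("closed-fist", "raise"),       "blinds-up"
    );

    static String interpret(Gesture g) {
        return RULES.getOrDefault(g, "no-op"); // unknown sequences are ignored
    }

    public static void main(String[] args) {
        System.out.println(interpret(new Gesture("open-hand", "swipe-right")));
    }
}
```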
Abstract:
Geographic Information Systems (GIS) are computational tools used to capture, store, query, manipulate, analyze and print geo-referenced data. A GIS is a multi-disciplinary system that can be used by different communities of users, each with its own interests and knowledge. Thus, different views of knowledge about the same reality need to be combined in such a way as to serve each community. This work presents a mechanism that allows users from different communities to access the same geographic database without knowing its internal structure. We use geographic ontologies to support a common and shared understanding of a specific domain: coral reefs. Using the descriptions in these ontologies, which represent the knowledge of the different communities, mechanisms are created to handle the different concepts. We use equivalent-class mappings and a semantic layer that interacts with the ontologies and the geographic database, returning answers to users' queries independently of the terms used.
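A minimal sketch of the equivalent-class mapping idea: each community's term is resolved to the shared concept actually stored in the geographic database. The terms and concepts below are hypothetical examples, not the thesis' ontologies.

```java
import java.util.Map;
import java.util.Optional;

// A toy semantic layer: community vocabulary -> shared database concept.
public class SemanticLayer {

    private static final Map<String, String> EQUIVALENT = Map.of(
        "coral bank", "coral_reef",
        "reef flat",  "coral_reef",
        "recife",     "coral_reef"
    );

    static Optional<String> resolve(String communityTerm) {
        return Optional.ofNullable(EQUIVALENT.get(communityTerm.toLowerCase()));
    }

    public static void main(String[] args) {
        // Two users with different vocabularies reach the same stored concept.
        System.out.println(resolve("coral bank")); // Optional[coral_reef]
        System.out.println(resolve("recife"));     // Optional[coral_reef]
    }
}
```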
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
This work presents a design method for building software components from the functional model of the software down to the assembly-code level in a rigorous fashion. The method is based on the B method, which was developed with the support and interest of British Petroleum (BP). One goal of this methodology is to contribute to solving an important problem known as the Verifying Compiler. In addition, this work describes a formal model of the Z80 microcontroller and of a real system from the petroleum area. To achieve this goal, the formal model of the Z80 was developed and documented, as it is a key component for verification down to the assembly level. To refine the methodology, it was applied to a petroleum production test system, which is presented in this work. Part of the technique is performed manually; however, most of these activities can be automated by a specific compiler, and the formal modelling of the microcontroller and of the production test system should provide relevant knowledge and experience for the design of such a compiler. In summary, this work should improve the viability of one of the most stringent criteria for formal verification: speeding up the verification process, reducing design time and increasing the quality and reliability of the final software product. All these qualities are very important for systems that involve serious risks or require high confidence, which is very common in the petroleum industry.
Abstract:
Currently, there are different definitions of fuzzy implication accepted in the literature. From the theoretical point of view, this lack of consensus shows that there is disagreement about the real meaning of "logical implication" in the Boolean and fuzzy contexts. From the practical point of view, it creates doubt about which "implication operators" software engineers should consider when implementing a Fuzzy Rule-Based System (FRBS). A poor choice of these operators can result in FRBSs with lower accuracy that are less appropriate to their application domains. One way around this situation is to understand the fuzzy logical connectives better, which requires knowing which properties such connectives can satisfy. Therefore, in order to contribute to the meaning of fuzzy implication and to the implementation of more appropriate FRBSs, several Boolean laws have been generalized and studied as equations or inequations in fuzzy logics. Such generalizations are called Boolean-like laws, and they do not generally hold in every fuzzy semantics. In this scenario, this dissertation investigates the sufficient and necessary conditions under which three Boolean-like laws, namely y ≤ I(x, y), I(x, I(y, x)) = 1 and I(x, I(y, z)) = I(I(x, y), I(x, z)), remain valid in the fuzzy context, considering six classes of fuzzy implications and implications generated by automorphisms. In addition, still with the aim of implementing more appropriate FRBSs, we propose an extension of them.
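As a worked illustration of such laws holding in one fuzzy semantics (a standard fact about the Łukasiewicz implication, not a result claimed by the dissertation), consider I_LK(x, y) = min(1, 1 − x + y) on [0, 1]:

```latex
% Law 1: y <= I(x, y). Since x <= 1 gives 1 - x + y >= y, and 1 >= y:
\[
I_{LK}(x,y) = \min(1,\, 1 - x + y) \;\ge\; \min(1,\, y) \;=\; y.
\]
% Law 2: I(x, I(y, x)) = 1. If x >= y, the inner value is 1 and the outer
% value is min(1, 2 - x) = 1; if x < y, the inner value is 1 - y + x and
% the outer value is min(1, 2 - y) = 1. Either way the law holds.
```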
Abstract:
I thank my advisor, João Marcos, for the intellectual support and patience he devoted to me throughout my graduate years. With his friendship, his ability to see problems from the best point of view and his love of doing Logic, he became a great inspiration for me. I thank my committee members, Claudia Nalon, Elaine Pimentel and Benjamin Bedregal, who read my work rigorously and gave me valuable suggestions to improve it. I am grateful to the Post-Graduate Program in Systems and Computation, which accepted me as a student and provided the right environment to develop my research. I also thank CAPES for a 21-month fellowship. Thanks to my research group, LoLITA (Logic, Language, Information, Theory and Applications), where I had the opportunity to make friends. Some of them I met in my early classes: Sanderson, Haniel and Carol Blasio. Others I met during the course, among them Patrick, Claudio, Flaulles and Ronildo. I thank Severino Linhares and Maria Linhares, who kindly hosted me at their home in my first months in Natal. This couple, together with my flatmates Fernado, Donátila and Aline, were my nuclear family in Natal. I thank my fiancée Luclécia for her precious affective support and for understanding my absence from home during my master's. I also thank my parents, Manoel and Zenilda, and my siblings, Alexandre, Paulo and Paula. Without their confidence and encouragement I would not have achieved success in this journey. "If you want the hits, be prepared for the misses" (Carl Yastrzemski)
Abstract:
This work discusses the application of ensemble techniques to the development of multimodal recognition systems with revocable biometrics. Biometric systems are the future of identification and user access control, as evidenced by the steady increase of such systems in today's society. However, much remains to be improved, mainly with regard to the accuracy, security and processing time of such systems. In the search for more efficient techniques, multimodal systems and revocable biometrics are promising, and can address many of the problems of traditional biometric recognition. A multimodal system combines different biometric techniques and overcomes many limitations, such as failures in extracting or processing the data. Among the various ways to build a multimodal system, the use of ensembles is quite promising, motivated by the performance and flexibility they have demonstrated over the years in their many applications. Regarding security, one of the biggest problems is that biometric traits are permanently bound to the user and cannot be changed if compromised. This problem, however, has been addressed by techniques known as revocable biometrics, which apply a transformation to the biometric data in order to protect its unique characteristics, making cancellation and replacement possible. In order to contribute to this important subject, this work compares the performance of individual classifiers, as well as ensembles of classifiers, on the original data and on biometric spaces transformed by different functions. Another highlighted aspect is the use of Genetic Algorithms (GA) in different parts of the system, seeking to further maximize its efficiency. One motivation of this development is to evaluate the gain that ensemble systems optimized by different GAs can bring to data in the transformed space. Another relevant goal is to generate even more efficient revocable systems by combining two or more transformation functions, demonstrating that it is possible to extract information of a similar standard by applying different transformation functions. All of this makes clear the importance of revocable biometrics, ensembles and GAs in the development of more efficient biometric systems, something increasingly important in the present day.
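A hedged sketch of one common cancelable-biometrics transform, random projection keyed by a user-specific seed; the dimensions, seed, and feature values are hypothetical, and the dissertation's transformation functions may differ. An ensemble would then vote over classifiers trained in this transformed space.

```java
import java.util.Random;

// The stored template is a random projection of the raw features, keyed by
// a per-user seed. Revoking the template = re-enrolling with a new seed.
public class CancelableTemplate {

    static double[] transform(double[] features, long userSeed, int outDim) {
        Random rng = new Random(userSeed); // seed acts as the revocable key
        double[] out = new double[outDim];
        for (int i = 0; i < outDim; i++) {
            for (double f : features) {
                out[i] += f * rng.nextGaussian(); // random projection
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[] raw = {0.2, 0.7, 0.1};            // hypothetical features
        double[] template = transform(raw, 42L, 4);
        // If the template leaks, re-enroll with a new seed: the old template
        // becomes useless while the raw biometric stays protected.
        double[] reissued = transform(raw, 43L, 4);
        System.out.println(template[0] + " vs " + reissued[0]);
    }
}
```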
Abstract:
The region of the Senador Pompeu Shear Zone (SPSZ), in the Northern Tectonic Domain of the Borborema Province (BP), has its recent history associated with the South Atlantic Ocean opening event in the Jurassic. Several geological models have discussed crustal uplift at local scale and at large scale (the Borborema Province) relative to its important regional tectonic directions. Identifying and relating these surfaces, stepped into several topographic levels by tectonic mechanisms, is difficult because of erosion. Moreover, the sedimentary record is complex, and the continental deposits lack a biostratigraphic record. The apatite fission-track analysis methodology applied to the SPSZ region aims at a better understanding of the morphotectonic mechanisms of the area and at the improvement of its morphotectonic models. To this end, the age and thermal history of 11 apatite samples collected on both sides of the shear zone were modelled and compared with other thermochronology studies in the BP. Based on the thermal studies in this research, the region of the BP experienced two distinct cooling events, separated by a period of relative stability. The first episode occurred between 130 and 90 Ma, beginning when the samples crossed the 120°C isotherm for the last time and ending at 70°C. The second cooling stage began about 30 Ma, when the temperature was 90°C, and continued until equilibrium with the present surface temperature of 30°C. Some evidence indicates a relationship between the thermal episodes and uplift events of the regional relief. The interpretation is based mainly on comparative studies between the thermochronological results and geological studies of the BP. Nóbrega et al. (2005), for example, in studies of the Portalegre Shear Zone, obtained results similar to those for the SPSZ, with some details relative to local tectonic activity. Morais Neto et al. (2000) interpreted two important cooling events in the BP based on their regional studies, which can be associated with regional uplift events. Studying the stratigraphic sequences of the Araripe Basin, in southern Ceará state, Assine (1992) concluded that the abrupt return to continental conditions after the last sedimentary sequence (Albian-Cenomanian) indicates a regional uplift of northeastern Brazil at about 100 Ma, in the Middle/Late Albian. These ages are compatible with the thermal model of the SPSZ. These two periods of the thermal history of the BP are fully recorded in only one of the fission-track age groups of the apatite samples, the oldest one, which suggests a heating event before 75 Ma that erased the earlier record of the first stage of relief evolution in the BP. Reactivation of NNE-SSW and E-W structures may have created ideal conditions for heating and local increases in the geothermal gradient. Equilibrium between the apatite temperatures of these groups and regional temperatures was reached about 50 Ma, when the samples of the two age groups evolved similarly towards present surface temperatures.