586 results for Computer science education and software engineering


Relevance:

100.00%

Publisher:

Abstract:

It is increasingly common for a single computer system to be used across different devices - personal computers, cellular telephones and others - and software platforms - graphical user interface systems, Web systems and others. Depending on the technologies involved, different software architectures may be employed. For example, Web systems usually adopt a client-server architecture - commonly extended into three tiers. In systems with graphical interfaces, the MVC architectural style is common. The use of architectures with different styles hinders the interoperability of systems that span multiple platforms. Another aggravating factor is that the user interface often has a different structure, appearance and behaviour on each device, which leads to low usability. Finally, building user interfaces specific to each of the devices involved, with distinct features and technologies, is work that must be done individually and does not scale. This study sought to address some of these problems by presenting a platform-independent reference architecture that allows the user interface to be built from an abstract specification described in a user interface specification language, MML. This solution is designed to offer greater interoperability between different platforms, greater consistency between the user interfaces, and greater flexibility and scalability for the incorporation of new devices
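
A minimal sketch of the general idea of deriving concrete interfaces from one abstract, platform-independent description. The actual specification language used in the work is MML; the simplified model and renderer below are illustrative stand-ins, not the work's architecture.

```python
# Abstract UI model plus one concrete renderer; other renderers would target
# desktop GUI toolkits or mobile devices from the same abstract specification.
from dataclasses import dataclass

@dataclass
class AbstractWidget:
    kind: str      # e.g. "text_input", "button"
    label: str

abstract_ui = [AbstractWidget("text_input", "Destination"),
               AbstractWidget("button", "Search")]

def render_html(ui):
    """Render the abstract specification for the Web platform."""
    parts = []
    for w in ui:
        if w.kind == "text_input":
            parts.append(f'<label>{w.label}<input type="text"></label>')
        elif w.kind == "button":
            parts.append(f"<button>{w.label}</button>")
    return "\n".join(parts)
```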

Relevance:

100.00%

Publisher:

Abstract:

In this work we propose a technique that uses uncontrolled small format aerial images, or SFAI, and stereo-photogrammetry techniques to construct georeferenced mosaics. Images are obtained using a simple digital camera coupled to a radio-controlled (RC) helicopter. Techniques for removing common distortions are applied, and the relative orientation of the models is recovered using projective geometry. Ground truth points are used to obtain the absolute orientation, as well as to define a scale and a coordinate system that relates image measurements to the ground. The mosaic is loaded into a GIS system, providing useful information to different types of users, such as researchers, governmental agencies, employees, fishermen and tourism enterprises. Results are reported, illustrating the applicability of the system. The main contribution is the generation of georeferenced mosaics using SFAIs, which have not yet been broadly explored in cartography projects. The proposed architecture is a viable and much less expensive solution when compared to systems using controlled pictures
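
A sketch of the two orientation steps described above, assuming OpenCV: a RANSAC homography recovers the relative orientation between overlapping frames, and ground control points fit a pixel-to-ground transform (absolute orientation plus scale). Function names and parameters are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def relative_orientation(img_a, img_b):
    """Estimate the projective transform taking img_b into img_a's frame."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def absolute_orientation(pixel_gcps, ground_gcps):
    """Fit a pixel->ground transform from ground control points (>= 4 pairs)."""
    H, _ = cv2.findHomography(np.float32(pixel_gcps), np.float32(ground_gcps))
    return H
```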

Relevance:

100.00%

Publisher:

Abstract:

A 3D binary image is considered well-composed if, and only if, the union of the faces shared by the foreground and background voxels of the image is a surface in R3. Well-composed images have some desirable topological properties, which allow us to simplify and optimize algorithms that are widely used in computer graphics, computer vision and image processing. These advantages have fostered the development of algorithms to repair two-dimensional (2D) and three-dimensional (3D) images that are not well-composed. These algorithms are known as repairing algorithms. In this dissertation, we propose two repairing algorithms, one randomized and one deterministic. Both algorithms are capable of making topological repairs in 3D binary images, producing well-composed images similar to the original ones. The key idea behind both algorithms is to iteratively change the assigned color of some points in the input image from 0 (background) to 1 (foreground) until the image becomes well-composed. The points whose colors are changed are chosen according to their values in the fuzzy connectivity map resulting from the image segmentation process. The use of the fuzzy connectivity map ensures that the subset of points chosen by the algorithm at any given iteration is the one with the least affinity with the background among all possible choices
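
A sketch of the iterative repair loop described above. The helper `critical_configurations` (which detects the voxel configurations that violate well-composedness) is hypothetical, and the dissertation's actual selection rule is not reproduced; this only illustrates the flip-guided-by-fuzzy-connectivity idea.

```python
import numpy as np

def repair(image: np.ndarray, fuzzy_map: np.ndarray) -> np.ndarray:
    """Flip background voxels to foreground until the image is well-composed.

    image     : 3D array of 0 (background) / 1 (foreground) voxels
    fuzzy_map : 3D array with each voxel's affinity to the foreground object
    """
    img = image.copy()
    while True:
        bad = critical_configurations(img)   # hypothetical: voxels in non-manifold configurations
        if not bad:
            return img                       # image is now well-composed
        candidates = [p for p in bad if img[p] == 0]
        if not candidates:
            return img
        # pick the background voxel with the highest foreground affinity,
        # i.e. the least affinity with the background
        best = max(candidates, key=lambda p: fuzzy_map[p])
        img[best] = 1
```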

Relevance:

100.00%

Publisher:

Abstract:

Multi-classifier systems, also known as ensembles, have been widely used to solve several problems because they often present better performance than the individual classifiers that compose them. For that to happen, however, the base classifiers must be both accurate and diverse among themselves - this is known as the diversity/accuracy dilemma. Given its importance, some works have investigated the behaviour of ensembles in the context of this dilemma. However, the majority of them address homogeneous ensembles, i.e., ensembles composed of only one type of classifier. Motivated by this limitation, this thesis uses genetic algorithms to perform a detailed study of the diversity/accuracy dilemma for heterogeneous ensembles
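
An illustrative sketch of the kind of fitness function a genetic algorithm can use to trade off accuracy against diversity when selecting a heterogeneous ensemble. The weighting scheme and the disagreement measure are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def disagreement(pred_a, pred_b):
    """Pairwise diversity: fraction of samples on which two members disagree."""
    return np.mean(pred_a != pred_b)

def ensemble_fitness(member_preds, y_true, alpha=0.7):
    """member_preds: list of prediction arrays (non-negative class indices), one per member."""
    votes = np.array(member_preds)
    # majority vote of the candidate ensemble, sample by sample
    majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
    accuracy = np.mean(majority == y_true)
    pairs = [(i, j) for i in range(len(member_preds)) for j in range(i + 1, len(member_preds))]
    diversity = np.mean([disagreement(votes[i], votes[j]) for i, j in pairs]) if pairs else 0.0
    # weighted combination of the two criteria (alpha is an assumed weight)
    return alpha * accuracy + (1 - alpha) * diversity
```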

Relevance:

100.00%

Publisher:

Abstract:

Ubiquitous computing systems operate in environments where the available resources change significantly during system operation, thus requiring adaptive and context-aware mechanisms to sense changes in the environment and adapt to new execution contexts. Motivated by this requirement, a framework for developing and executing adaptive context-aware applications is proposed. The PACCA framework employs aspect-oriented techniques to modularize the adaptive behaviour and to keep the application logic apart from it. PACCA uses the abstract aspect concept to provide flexibility, allowing new adaptive concerns to be added by extending the abstract aspect. Furthermore, PACCA has a default aspect model that covers the adaptive concerns commonly found in ubiquitous applications. It exploits the synergy between aspect-orientation and dynamic composition to achieve context-aware adaptation guided by predefined policies, and aims to allow software modules to be loaded on demand, making better use of mobile devices and their limited resources. A development process for conceiving ubiquitous applications is also proposed, presenting a set of activities that guide the developer of adaptive context-aware applications. Finally, a quantitative, metric-based study evaluates the approach based on aspects and dynamic composition for the construction of ubiquitous applications
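
A hypothetical sketch of the "abstract aspect" idea: an abstract adaptation concern that concrete concerns extend, applied dynamically when the sensed context changes. Class and method names are illustrative assumptions, not PACCA's actual API.

```python
from abc import ABC, abstractmethod

class AdaptationAspect(ABC):
    """Abstract aspect: one adaptive concern, kept apart from the application logic."""

    @abstractmethod
    def applies_to(self, context: dict) -> bool: ...

    @abstractmethod
    def adapt(self, application) -> None: ...

class LowBatteryAspect(AdaptationAspect):
    """Concrete concern: degrade gracefully when the device battery is low."""

    def applies_to(self, context):
        return context.get("battery_level", 100) < 20

    def adapt(self, application):
        application.disable_background_sync()   # assumed application hook

def on_context_change(context, application, aspects):
    """Dynamic composition: apply only the aspects the current context triggers."""
    for aspect in aspects:
        if aspect.applies_to(context):
            aspect.adapt(application)
```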

Relevance:

100.00%

Publisher:

Abstract:

Recently, the focus given to Web Services and Semantic Web technologies has driven the development of several research projects addressing the Web service composition issue in different ways. Meanwhile, the challenge of creating an environment in which an abstract business process can be specified and automatically realized by a composite service in a dynamic way is still considered an open problem. WSDL and BPEL, provided by industry, support only manual service composition because they lack the semantics needed for Web services to be discovered, selected and combined by software agents. Service ontologies provided by the Semantic Web enrich the syntactic descriptions of Web services to facilitate the automation of tasks such as discovery and composition. This work presents an environment for specifying and ad-hoc executing Web service-based business processes, named WebFlowAH. WebFlowAH employs a common domain ontology to describe both Web services and business processes. It allows processes to be specified in terms of user goals or desires that are expressed based on the concepts of this common domain ontology. This approach allows processes to be specified in an abstract, high-level way, unburdening the user from the underlying details needed to effectively run the process workflow
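
A minimal sketch of goal-driven service composition over a shared domain ontology: services are described by the ontology concepts they consume and produce, and a plan is built until the user's goal concepts are covered. The data structures and the greedy strategy are assumptions for illustration, not WebFlowAH's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    name: str
    inputs: set = field(default_factory=set)    # ontology concepts consumed
    outputs: set = field(default_factory=set)   # ontology concepts produced

def compose(goal_concepts: set, available: set, services: list) -> list:
    """Greedily chain services until every goal concept can be produced."""
    plan, known = [], set(available)
    progress = True
    while not goal_concepts <= known and progress:
        progress = False
        for svc in services:
            if svc not in plan and svc.inputs <= known and not svc.outputs <= known:
                plan.append(svc)
                known |= svc.outputs
                progress = True
    return plan if goal_concepts <= known else []
```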

Relevance:

100.00%

Publisher:

Abstract:

The constant increase in the complexity of computer applications demands the development of more powerful hardware to support them. With processor operating frequencies reaching their limit, the most viable solution is the use of parallelism. The concept of MPSoCs (Multi-Processor Systems-on-Chip) is based on parallelism techniques and the progressive growth in the number of transistors that can be integrated on a single chip. MPSoCs will eventually become a cheaper and faster alternative to supercomputers and clusters, and applications developed for those high-performance systems will migrate to computers equipped with MPSoCs containing dozens to hundreds of computation cores. In particular, applications in the area of oil and natural gas exploration are also characterized by the high processing capacity they require and would benefit greatly from such systems. This work evaluates a traditional and complex application of the oil and gas industry, known as reservoir simulation, on an integrated computational system in a single chip with hundreds of functional units. Since the STORM (MPSoC Directory-Based Platform) platform already had a shared memory model, a new distributed memory model was developed, along with a message passing library that follows the MPI standard
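
A toy sketch of the MPI-style point-to-point interface such a message passing library exposes, with in-process queues standing in for the platform's on-chip interconnect. This is purely illustrative and is not the STORM library.

```python
import queue

class Comm:
    def __init__(self, n_cores):
        self.size = n_cores
        self._mailbox = {(src, dst): queue.Queue()
                         for src in range(n_cores) for dst in range(n_cores)}

    def send(self, data, src, dest, tag=0):
        """MPI_Send analogue: enqueue a tagged message from core src to core dest."""
        self._mailbox[(src, dest)].put((tag, data))

    def recv(self, src, dest, tag=0):
        """MPI_Recv analogue: block until a message from src arrives at dest."""
        msg_tag, data = self._mailbox[(src, dest)].get()
        assert msg_tag == tag
        return data
```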

Relevance:

100.00%

Publisher:

Abstract:

Middleware platforms have been widely used as an underlying infrastructure for the development of distributed applications. They provide distribution and heterogeneity transparency and a set of services that ease the construction of distributed applications. Nowadays, middleware accommodates an increasing variety of requirements to satisfy distinct application domains. This broad range of application requirements increases the complexity of the middleware, due to the introduction of many cross-cutting concerns in the architecture, which are not properly modularized by traditional programming techniques, resulting in these concerns being tangled and spread throughout the middleware code. The presence of these cross-cutting concerns limits middleware scalability, and the aspect-oriented paradigm has been used successfully to improve the modularity, extensibility and customization capabilities of middleware. This work presents AO-OiL, an aspect-oriented (AO) middleware architecture based on an AO middleware reference architecture. This middleware follows the philosophy that middleware functionality must be driven by the application requirements. AO-OiL consists of an AO refactoring of the OiL (Orb in Lua) middleware in order to separate basic and cross-cutting concerns. The proposed architecture was implemented in Lua and RE-AspectLua. To evaluate the impact of the refactoring on the middleware architecture, this work presents a comparative performance analysis between AO-OiL and OiL

Relevance:

100.00%

Publisher:

Abstract:

The objective of research in artificial intelligence is to enable the computer to execute functions that are performed by humans using knowledge and reasoning. This work was developed in the area of machine learning, the branch of artificial intelligence concerned with the design and development of algorithms and techniques capable of enabling computational learning. The objective of this work is to analyze a feature selection method for ensemble systems. The proposed method belongs to the filter approach to feature selection; it uses variance and Spearman correlation to rank the features, and reward and punishment strategies to measure the importance of each feature for the identification of the classes. Several different configurations were used for each ensemble, varying from hybrid to non-hybrid ensemble structures. They were submitted to five combination methods (voting, sum, weighted sum, multilayer perceptron and naïve Bayes), which were applied to six distinct databases (real and artificial). The classifiers applied during the experiments were k-nearest neighbour, multilayer perceptron, naïve Bayes and decision tree. Finally, the performance of the ensembles was analyzed comparatively, using no feature selection method, using the original filter-approach feature selection method, and using the proposed method. For this comparison, a statistical test was applied, which demonstrated that there was a significant improvement in the precision of the ensembles
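
A sketch of the filter-style ranking step mentioned above: each feature is scored by its variance and by the absolute Spearman correlation with the class label, and the two ranks are combined. The equal weighting of the two criteria is an assumption for illustration; the reward/punishment strategy is not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr, rankdata

def rank_features(X, y):
    """X: (n_samples, n_features) array; y: class labels. Returns feature order, best first."""
    variances = X.var(axis=0)
    correlations = []
    for j in range(X.shape[1]):
        rho, _ = spearmanr(X[:, j], y)
        correlations.append(abs(rho))
    correlations = np.array(correlations)
    # higher variance and higher |correlation| are better, so rank them descending
    combined = rankdata(-variances) + rankdata(-correlations)
    return np.argsort(combined)          # smallest combined rank = most important feature
```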

Relevance:

100.00%

Publisher:

Abstract:

Interval arithmetic, well known as Moore arithmetic, does not possess the same properties as the real numbers, and for this reason it faces a problem of an operative nature when we want to solve interval equations as extensions of real equations using the usual equality and interval arithmetic: intervals have no additive inverse, and the distributivity of multiplication over addition does not hold for every triple of intervals. The lack of these properties prevents the use of equational logic, both for solving an interval equation and for representing a real equation, and also for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and approximation on intervals, introduced by Acióly [6] in 1991, an interval equation can satisfactorily represent a real equation, since the terms of the interval equation carry the information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later, local equality for intervals [8] and [33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras according to (Hennessy [31], 1988) and (Santiago [7], 1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation t ≈ t′ is deduced from E in the proposed system SDedLoc(E), the interpretations of t and t′ are locally equal in any local Σ-algebra that satisfies the fixed set of local equations E, whenever t and t′ have meaning in A. This assures a kind of soundness between the local equational logic and the local algebras
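
A worked illustration (not from the dissertation) of the two algebraic failures cited above, using standard Moore interval arithmetic:

```latex
\begin{align*}
  &\text{No additive inverse: for } X=[1,2],\\
  &\qquad X + (-X) = [1,2] + [-2,-1] = [-1,1] \neq [0,0].\\
  &\text{Distributivity fails (only } X(Y+Z) \subseteq XY+XZ \text{ holds): for } X=[1,2],\ Y=[1,1],\ Z=[-1,-1],\\
  &\qquad X(Y+Z) = [1,2]\cdot[0,0] = [0,0], \qquad XY + XZ = [1,2] + [-2,-1] = [-1,1].
\end{align*}
```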

Relevance:

100.00%

Publisher:

Abstract:

Currently, with the increase in the complexity of doing business, organizations are seeking information systems that help them respond quickly to new demands in the processes of producing products and services. An information system is no longer just a support tool; it has become an integral part of doing business. However, in spite of significant technological evolution in recent years, the information systems that support business do not respond efficiently to the constant changes that occur in many organizations. One of the main problems currently faced by information systems is the lack of strategic alignment between business strategy and information technology. Strategic alignment can be defined as the fit between business strategies and objectives and the strategies, objectives and functions of information technology, in such a way as to contribute to increasing the competitiveness of the organization over time. Strategic alignment, together with strategic planning, is an important management instrument. Approaches for operationalizing this alignment are currently being developed, but they are still in their initial stages, since it is a relatively new concept in the literature. Another point that needs to be taken into consideration during strategic alignment is traceability between the business elements and IT. Traceability is necessary, for example, when one wishes to know exactly which goal defined in the business strategy was left out or not met due to a modification made to the IT strategy. Very few proposals present concrete ways, supported by software systems, to obtain strategic alignment while taking this traceability into consideration. Therefore, the objective of this work is to propose a strategic alignment process, supported by a software system, capable of providing traceability between organizational objectives and business processes, based on formalization standards defined through a model-driven approach

Relevance:

100.00%

Publisher:

Abstract:

Some programs may have their input data specified by formalized context-free grammars. This formalization facilitates the use of tools to systematize and raise the quality of their testing process. Among this category of programs, compilers were the first to use this kind of tool to automate their tests. In this work we present an approach for defining tests from the formal description of a program's inputs. Sentence generation is performed taking into account the syntactic aspects defined by the specification of the inputs, i.e., the grammar. For optimization, coverage criteria are used to limit the number of tests without diminishing their quality. Our approach uses these criteria to drive generation towards sentences that satisfy a specific coverage criterion. The approach is based on the Lua language, relying heavily on its coroutines and dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways, in order to satisfy the different coverage criteria implemented. To make the tool simpler to use, the EBNF notation was adopted for the specification of the inputs; its parser was specified in the Meta-Environment tool for rapid prototyping
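
A sketch (in Python rather than the Lua/coroutine implementation described above) of grammar-driven sentence generation steered by a simple production-coverage criterion: sentences are generated until every production of the toy grammar has been exercised. The grammar, depth cutoff and coverage criterion are illustrative assumptions.

```python
import random

grammar = {                      # toy CFG; keys are non-terminals
    "EXPR": [["TERM"], ["TERM", "+", "EXPR"]],
    "TERM": [["num"], ["(", "EXPR", ")"]],
}

def generate(symbol, used, depth=0):
    """Expand one symbol, recording which (non-terminal, alternative) pairs were used."""
    if symbol not in grammar:
        return [symbol]                                  # terminal symbol
    # bias towards short derivations once the sentence gets deep
    alts = grammar[symbol] if depth < 6 else [min(grammar[symbol], key=len)]
    choice = alts[random.randrange(len(alts))]
    used.add((symbol, grammar[symbol].index(choice)))
    out = []
    for sym in choice:
        out += generate(sym, used, depth + 1)
    return out

def sentences_for_production_coverage(start="EXPR"):
    """Generate sentences until every production has been covered at least once."""
    all_prods = {(nt, i) for nt, alts in grammar.items() for i in range(len(alts))}
    covered, suite = set(), []
    while covered != all_prods:
        suite.append(" ".join(generate(start, covered)))
    return suite
```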

Relevance:

100.00%

Publisher:

Abstract:

With the increasing complexity of software systems, there is also an increasing concern about their faults. These faults can cause financial losses and even loss of life. Therefore, we propose in this work the minimization of faults in software by using tests generated from formal specifications. The combination of testing and formal specifications is gaining strength, mainly through MBT (Model-Based Testing). Developing software from formal specifications, when the whole refinement process is done rigorously, ensures that what is specified will be implemented in the application. Thus, an implementation generated from these specifications would accurately reflect what was specified. However, the specification is not always refined to the level of implementation and code generation, and in these cases the tests generated from the specification tend to find faults. Additionally, the generation of so-called "invalid tests", i.e., tests that exercise application scenarios that were not addressed in the specification, complements the formal development process even more significantly. Therefore, this work proposes a method for generating tests from B formal specifications. The method is structured in pseudo-code and is based on the systematization of the black-box testing techniques of boundary value analysis and equivalence partitioning, as well as the orthogonal pairs technique. The method was applied to a B specification, and B test machines that generate test cases independent of implementation language were produced. Aiming to validate the method, the test cases were manually transformed into JUnit test cases, and the application, created from the B specification and developed in Java, was tested. Faults were found during the execution of the JUnit test cases
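
A tiny sketch of classic boundary value analysis for a single integer input with a specified valid range. This only illustrates the underlying black-box technique the method systematizes; it is not the dissertation's B-based procedure.

```python
def boundary_values(low, high):
    """Return the usual BVA probes: just below/at/above each boundary, plus a nominal value."""
    return sorted({low - 1, low, low + 1, (low + high) // 2, high - 1, high, high + 1})

# e.g. for an input specified as 1 <= x <= 100:
# boundary_values(1, 100) -> [0, 1, 2, 50, 99, 100, 101]
```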

Relevance:

100.00%

Publisher:

Abstract:

Despite the emergence of other forms of artificial lift, sucker rod pumping systems remain predominant because of their operational flexibility and lower investment cost compared to other lifting techniques. A successful rod pumping design necessarily depends on delivering the estimated flow rate while controlling the wear of the pumping equipment used in the chosen configuration. However, balancing these elements is particularly challenging, especially for the many designers who still lack the experience needed to produce good pumping projects in time. Even with the various computer applications on the market intended to facilitate this task, designers must face a gruelling process of trial and error until they find the most appropriate combination of equipment for installation in the well. This thesis proposes the creation of an expert system for the design of sucker rod pumping systems. Its mission is to guide a petroleum engineer in the task of selecting a set of equipment appropriate to the context given by the characteristics of the oil that will be raised to the surface. Features such as the level of gas separation, the presence of corrosive elements, and the possibility of sand production and waxing are taken into account when selecting the pumping unit, the sucker-rod string and the subsurface pump, as well as their operation mode. The system is able to approximate the inference process to human reasoning, which leads to results closer to those obtained by a specialist. For this, its production rules were based on the theory of fuzzy sets, which is able to model the vague concepts typically present in human reasoning. The operating parameters of the pumping system are calculated by the API RP 11L method. Based on the input information, the system is able to return to the user a set of pumping configurations that meet a given design flow rate without subjecting the selected equipment to efforts beyond what it can bear
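
An illustrative sketch of the fuzzy production-rule idea: a membership function for a linguistic term and one made-up rule ("IF corrosiveness is high AND sand production is high THEN a heavy-duty rod string is recommended"). The terms, thresholds and the rule itself are assumptions, not the thesis's rule base or the API RP 11L calculations.

```python
def rising(x, a, b):
    """'High' linguistic term: membership rises linearly from 0 at x<=a to 1 at x>=b."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def rule_heavy_duty_rod(corrosiveness, sand_fraction):
    """Fuzzy AND of the antecedents via min(); the result is the rule's activation degree."""
    return min(rising(corrosiveness, 0.4, 0.8), rising(sand_fraction, 0.05, 0.20))
```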

Relevance:

100.00%

Publisher:

Abstract:

The distribution of petroleum products through pipeline networks is an important problem that arises in the production planning of refineries. It consists of determining what will be done at each production stage over a given time horizon, concerning the distribution of products from source nodes to demand nodes, passing through intermediate nodes. Constraints concerning storage limits, delivery times, source availability, and limits on sending or receiving, among others, have to be satisfied. This problem can be viewed as a biobjective problem that aims at minimizing both the time needed to transport the set of packages through the network and the fragmentation, i.e., the successive transmission of different products in the same pipe. In this work, three algorithms are developed and applied to this problem: the first is a discrete algorithm based on Particle Swarm Optimization (PSO), with local search procedures and path-relinking proposed as velocity operators; the second and third are two versions based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The proposed algorithms are compared to other approaches to the same problem in terms of solution quality and computational time, so that the efficiency of the developed methods can be evaluated
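
A minimal sketch of the biobjective machinery such approaches rely on: Pareto dominance over (transport time, fragmentation) pairs and extraction of the non-dominated front, as used when ranking solutions in NSGA-II-style algorithms. This is the generic technique, not the thesis's implementation.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(solutions):
    """solutions: list of (time, fragmentation) tuples, both to be minimized."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# example: non_dominated_front([(10, 3), (12, 2), (11, 4)]) -> [(10, 3), (12, 2)]
```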