965 results for Arquitetura e tecnologia


Relevance:

20.00%

Publisher:

Abstract:

This thesis proposes the architecture of a new multiagent system framework for the hybridization of metaheuristics, inspired by the general Particle Swarm Optimization (PSO) framework. The main contribution is an effective approach to solving hard combinatorial optimization problems. PSO was chosen as inspiration because it is inherently multiagent, allowing the exploration of multiagent system features such as learning and cooperation techniques. In the proposed architecture, particles are autonomous agents with memory and with methods for learning and decision making, using search strategies to move through the solution space. The concepts of position and velocity originally defined in PSO are redefined for this approach. The proposed architecture was applied to the Traveling Salesman Problem and to the Quadratic Assignment Problem, and computational experiments were performed to test its effectiveness. The experimental results were promising, with satisfactory performance, although the potential of the proposed architecture has not been fully explored. In future research, the proposed approach will also be applied to multiobjective combinatorial optimization problems, which are closer to real-world problems. In the context of applied research, we intend to work with both undergraduate and technical-level students on the implementation of the proposed architecture for real-world problems.
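
As a purely illustrative aid (not the framework proposed in the thesis), the sketch below shows one common way to reinterpret PSO for the TSP: the position becomes a tour, the velocity becomes a sequence of swaps, and each particle is an agent with its own memory of the best tour found. All names, parameters, and update rules are assumptions for illustration.

```python
# Illustrative sketch only: a minimal "particle as autonomous agent" for the TSP,
# with position redefined as a tour and velocity redefined as a list of swaps.
# The update rules here are assumptions, not the thesis's actual framework.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swaps_towards(current, target):
    """Swap sequence that transforms `current` into `target` (a discrete 'velocity')."""
    current, swaps = current[:], []
    for i in range(len(current)):
        if current[i] != target[i]:
            j = current.index(target[i])
            current[i], current[j] = current[j], current[i]
            swaps.append((i, j))
    return swaps

class ParticleAgent:
    def __init__(self, n_cities):
        self.position = random.sample(range(n_cities), n_cities)  # a tour
        self.best = self.position[:]                               # agent memory

    def move(self, global_best, dist, follow_prob=0.5):
        # Decision making: probabilistically apply swaps pulling towards the
        # personal best and the global best tours.
        for target in (self.best, global_best):
            for i, j in swaps_towards(self.position, target):
                if random.random() < follow_prob:
                    self.position[i], self.position[j] = self.position[j], self.position[i]
        if tour_length(self.position, dist) < tour_length(self.best, dist):
            self.best = self.position[:]

# Usage: a tiny synthetic instance with a small swarm of agents.
n = 8
dist = [[abs(i - j) + 1 for j in range(n)] for i in range(n)]
swarm = [ParticleAgent(n) for _ in range(10)]
for _ in range(50):
    g_best = min((p.best for p in swarm), key=lambda t: tour_length(t, dist))
    for p in swarm:
        p.move(g_best, dist)
print("best tour length:", tour_length(g_best, dist))
```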

Relevance:

20.00%

Publisher:

Abstract:

This work investigates the relationship between Information Architecture in digital environments and Intellectual Property Rights. The study is justified by the need to better understand the emerging dynamics among Law, Digital Information and Communication Technologies, and Intellectual Property Rights. Three areas of knowledge are directly related to the study: Information Science, Law, and Computer Science. The methodology used in the investigative process follows a qualitative approach. With respect to technical procedures, the research is classified as bibliographic, based on secondary sources. The results showed that current Brazilian legislation does not provide the mechanisms needed to secure, for their holders, the intellectual property rights associated with an Information Architecture project.

Relevance:

20.00%

Publisher:

Abstract:

The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these applications' constraints. General-purpose processors have inherent flexibility, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Conversely, since application-specific devices perform only a few tasks, they achieve high performance but offer less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this new paradigm is to modify the device's behavior according to the application. Thus, it is possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. The instruction-level parallelism is extracted at compile time; therefore, this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE aims to visualize the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and was validated through simulation and prototyping. For performance analysis, benchmarks were used, demonstrating a speedup of 11x on the execution of some applications.
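
As a hedged illustration of compile-time instruction-level-parallelism extraction (not the actual RoSA optimization phase or GCC pass), the sketch below levels a hypothetical data-flow graph so that independent operations fall into the same level and could be issued to parallel functional units of a reconfigurable fabric.

```python
# Illustrative sketch only: ASAP levelling of a small data-flow graph to expose
# instruction-level parallelism at compile time. The graph and operation names are
# hypothetical; this is not the RoSA/GCC optimization pass itself.
from collections import defaultdict

# Hypothetical data-flow: op -> list of ops it depends on.
deps = {
    "load_a": [], "load_b": [], "load_c": [],
    "mul1": ["load_a", "load_b"],
    "mul2": ["load_b", "load_c"],
    "add1": ["mul1", "mul2"],
    "store": ["add1"],
}

def asap_levels(deps):
    """Group operations into levels; ops in the same level are independent
    and could be mapped to parallel functional units."""
    level = {}
    def compute(op):
        if op not in level:
            level[op] = 0 if not deps[op] else 1 + max(compute(d) for d in deps[op])
        return level[op]
    for op in deps:
        compute(op)
    grouped = defaultdict(list)
    for op, lv in level.items():
        grouped[lv].append(op)
    return [grouped[lv] for lv in sorted(grouped)]

for lv, ops in enumerate(asap_levels(deps)):
    print(f"cycle {lv}: issue in parallel -> {ops}")
```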

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico

Relevance:

20.00%

Publisher:

Abstract:

It is increasingly common for a single computing system to be used through different devices (personal computers, cell phones, and others) and software platforms (graphical user interface systems, Web systems, and others). Depending on the technologies involved, different software architectures may be employed. For example, Web systems usually adopt a client-server architecture, commonly extended to three tiers, whereas systems with graphical interfaces commonly adopt the MVC architectural style. The use of architectures with different styles hinders the interoperability of systems across multiple platforms. A further complication is that the user interface often has a different structure, appearance, and behavior on each device, which leads to low usability. Finally, building user interfaces specific to each device, with distinct features and technologies, is work that must be done individually and does not scale. This study sought to address some of these problems by presenting a platform-independent reference architecture that allows the user interface to be built from an abstract specification described in a user interface specification language, MML. This solution is designed to offer greater interoperability between different platforms, greater consistency between user interfaces, and greater flexibility and scalability for the incorporation of new devices.
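
The sketch below is a loose, hypothetical illustration of deriving platform-specific interfaces from one abstract description; the dictionary format and the two renderers are invented stand-ins, not MML or the reference architecture proposed in the work.

```python
# Illustrative sketch only: rendering one abstract UI description to two hypothetical
# platform targets. The "MML-like" dictionary and the renderers are assumptions for
# illustration; they are not the actual MML language.
abstract_ui = {
    "form": "login",
    "widgets": [
        {"type": "text_input", "id": "user", "label": "User"},
        {"type": "password_input", "id": "pwd", "label": "Password"},
        {"type": "button", "id": "ok", "label": "Sign in"},
    ],
}

def render_html(ui):
    rows = [f'<label>{w["label"]}</label><input id="{w["id"]}">' for w in ui["widgets"]]
    return f'<form name="{ui["form"]}">' + "".join(rows) + "</form>"

def render_mobile(ui):
    # A textual stand-in for a native mobile layout.
    return "\n".join(f'[{w["type"]}] {w["label"]} -> {w["id"]}' for w in ui["widgets"])

# The same abstract model drives every platform-specific interface.
print(render_html(abstract_ui))
print(render_mobile(abstract_ui))
```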

Relevance:

20.00%

Publisher:

Abstract:

Middleware platforms have been widely used as the underlying infrastructure for the development of distributed applications. They provide distribution and heterogeneity transparency and a set of services that ease the construction of distributed applications. Nowadays, middleware accommodates an increasing variety of requirements to satisfy distinct application domains. This broad range of application requirements increases the complexity of the middleware, due to the introduction of many crosscutting concerns in the architecture, which are not properly modularized by traditional programming techniques, resulting in the tangling and scattering of these concerns in the middleware code. The presence of these crosscutting concerns limits middleware scalability, and the aspect-oriented paradigm has been used successfully to improve the modularity, extensibility, and customization capabilities of middleware. This work presents AO-OiL, an aspect-oriented (AO) middleware architecture based on the AO middleware reference architecture. This middleware follows the philosophy that middleware functionality must be driven by the application requirements. AO-OiL consists of an AO refactoring of the OiL (Orb in Lua) middleware in order to separate basic and crosscutting concerns. The proposed architecture was implemented in Lua and RE-AspectLua. To evaluate the impact of the refactoring on the middleware architecture, this work presents a comparative performance analysis between AO-OiL and OiL.
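
For readers unfamiliar with the aspect-oriented idea, the sketch below shows a crosscutting concern (call timing/logging) separated from the base logic by an interceptor. It is written in Python as a stand-in; AO-OiL itself uses Lua and RE-AspectLua, whose weaving mechanisms differ.

```python
# Illustrative sketch only: separating a crosscutting concern (logging) from the
# primary logic with an aspect-like interceptor. Not the AO-OiL implementation.
import functools, time

def logging_aspect(func):
    """'Advice' woven around a 'join point' (the intercepted call)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"[aspect] {func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

# Base (primary) concern: an invocation stub with no logging code inside it.
@logging_aspect
def invoke(servant, operation, *params):
    return getattr(servant, operation)(*params)

class Calculator:
    def add(self, a, b):
        return a + b

print(invoke(Calculator(), "add", 2, 3))
```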

Relevance:

20.00%

Publisher:

Abstract:

Currently, with the increasing complexity of doing business, organizations are seeking information systems that help them respond quickly to new demands in the production of products and services. An information system is no longer just a support tool; it has become an integral part of doing business. However, in spite of significant technological evolution in recent years, the information systems that support business do not respond efficiently to the constant changes that occur in many organizations. One of the main problems currently faced by information systems is the lack of strategic alignment between business strategy and information technology. Strategic alignment can be defined as the fit between business strategies and objectives and the strategies, objectives, and functions of information technology, in such a way as to contribute to increasing the competitiveness of the organization over time. Strategic alignment, together with strategic planning, is an important management instrument. Approaches for operationalizing this alignment are currently being developed but are still in their initial stages, since it is a relatively new concept in the literature. Another point that needs to be taken into consideration during strategic alignment is traceability between business elements and IT elements. Traceability is necessary, for example, when one wishes to know exactly which goal defined in the business strategy was left out or not met due to a modification made in the IT strategy. Very few proposals present concrete means, supported by software systems, of achieving strategic alignment while taking this traceability into consideration. Therefore, the objective of this work is to propose a strategic alignment process supported by a software system capable of providing traceability between organizational objectives and business processes, based on formalization standards defined through a model-driven approach.
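
A minimal sketch of the traceability idea, assuming a hypothetical set of trace links: it only shows how links between goals, processes, and IT elements can answer impact questions, and it is unrelated to the model-driven process and tooling actually proposed in the work.

```python
# Illustrative sketch only: a tiny traceability matrix linking business goals to IT
# elements, so the impact of changing an IT element can be queried. All names below
# are hypothetical placeholders.
from collections import defaultdict

trace_links = [
    ("Goal: reduce order lead time", "Process: order fulfilment", "Service: inventory API"),
    ("Goal: reduce order lead time", "Process: order fulfilment", "Service: shipping API"),
    ("Goal: improve customer retention", "Process: support ticketing", "Service: CRM module"),
]

impacted_by = defaultdict(set)
for goal, process, it_element in trace_links:
    impacted_by[it_element].add((goal, process))

def impact_of_change(it_element):
    """Which business goals and processes are affected if this IT element changes?"""
    return sorted(impacted_by.get(it_element, set()))

print(impact_of_change("Service: shipping API"))
```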

Relevance:

20.00%

Publisher:

Abstract:

Education is one of the oldest activities practiced by humankind, but today it is still often carried out without creating dialogue and discussion among all those involved, and students remain passive agents with no interactivity with teachers or with the content covered. This work presents a tool for providing interactivity in educational environments using cell phones; in this way, teachers can use technology to assist the educational process and obtain a better evaluation of students. The architecture of the developed tool is shown, describing the features of the wireless communication technologies used and how connection management is performed with Bluetooth technology, which supports a limited number of simultaneous connections. The details of handling multiple Bluetooth connections and how the system should behave with numerous users are presented, with a comparison between different methods of managing connections. Finally, the results obtained with the use of the tool are presented, followed by their analysis and a conclusion on the work.
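
A minimal simulation sketch of the connection-management problem: a classic Bluetooth piconet allows a master at most 7 active slaves, so a tool serving a whole classroom must cycle connections. The round-robin policy below is only one possible strategy and is not the tool's actual implementation.

```python
# Illustrative sketch only: round-robin service of more clients than a Bluetooth
# piconet can keep active at once (at most 7 active slaves per master). A plain
# simulation, not the tool's real connection code.
from collections import deque

MAX_ACTIVE = 7

def serve_round_robin(clients, rounds=3):
    waiting = deque(clients)
    active = []
    for _ in range(rounds):
        # Fill the active slots up to the piconet limit.
        while waiting and len(active) < MAX_ACTIVE:
            active.append(waiting.popleft())
        # Exchange data with every active client, then park it again.
        for client in active:
            print(f"polling {client}")
        waiting.extend(active)
        active.clear()

serve_round_robin([f"phone-{i:02d}" for i in range(1, 11)])
```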

Relevance:

20.00%

Publisher:

Abstract:

Vascular segmentation is important in diagnosing vascular diseases such as stroke, and it is hampered by noise in the image and by very thin vessels that can pass unnoticed. One way to accomplish the segmentation is to extract the centerline of the vessel with height ridges, which use intensity as the feature for segmentation. This process can take from seconds to minutes, depending on the technology employed. In order to accelerate the segmentation method proposed by Aylward [Aylward & Bullitt 2002], we adapted it to run in parallel using the CUDA architecture. The performance of the segmentation method running on the GPU is compared to both the same method running on the CPU and the original Aylward method, also running on the CPU. The improvement of the new method over the original one is twofold: first, the starting point for the segmentation process is not a single point in the blood vessel but a volume, making it easier for the user to segment a region of interest; second, the new method was 873 times faster running on the GPU, and 150 times faster running on the CPU, than the original method on the CPU.
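
A hedged sketch of the parallelization idea only: tracking is started from many seeds (a seed volume) and each seed is processed independently. The "tracking" step is a toy placeholder, not Aylward's ridge traversal, and Python multiprocessing stands in for the CUDA kernels.

```python
# Illustrative sketch only: seeds from a seed volume are tracked independently,
# which is what makes the problem embarrassingly parallel. Placeholder tracking logic.
from multiprocessing import Pool

import numpy as np

def track_from_seed(args):
    volume, seed = args
    # Placeholder "tracking": walk towards the brightest neighbor for a few steps.
    z, y, x = seed
    path = [seed]
    for _ in range(5):
        zs, ys, xs = np.clip([z - 1, y - 1, x - 1], 0, np.array(volume.shape) - 3)
        patch = volume[zs:zs + 3, ys:ys + 3, xs:xs + 3]
        dz, dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
        z, y, x = zs + dz, ys + dy, xs + dx
        path.append((int(z), int(y), int(x)))
    return path

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    volume = rng.random((32, 32, 32))
    seeds = [(8, 8, 8), (16, 16, 16), (24, 24, 24)]   # a "seed volume" of start points
    with Pool() as pool:                              # one independent task per seed
        centerlines = pool.map(track_from_seed, [(volume, s) for s in seeds])
    print(centerlines[0])
```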

Relevance:

20.00%

Publisher:

Abstract:

There is a need for multi-agent system designers to assess the quality of their systems in the earliest phases of the development process. The architectures of the agents are part of the design of these systems and therefore also need to have their quality evaluated. Motivated by the important role that emotions play in our daily lives, embodied-agent researchers have aimed to create agents capable of affective and natural interaction with users that produces a beneficial or desirable result. To this end, several studies proposing agent architectures with emotions have appeared, without being accompanied by appropriate methods for assessing those architectures. The objective of this study is to propose a methodology for evaluating emotional agent architectures that assesses the quality attributes of the architectural design and, through human-computer interaction evaluation, the effects on the subjective experience of users of applications that implement them. The methodology is based on a model of well-defined metrics. In assessing the quality of the architectural design, the attributes assessed are extensibility, modularity, and complexity. In assessing the effects on users' subjective experience, which involves implementing the architecture in an application (we suggest the domain of computer games), the metrics are enjoyment, felt support, warmth, caring, trust, cooperation, intelligence, interestingness, naturalness of emotional reactions, believability, reduction of frustration, and likeability, together with average time and average number of attempts. We experimented with this approach and evaluated five emotional agent architectures: BDIE, DETT, Camurri-Coglio, EBDI, and Emotional-BDI. Two of the architectures, BDIE and EBDI, were implemented in a version of the game Minesweeper and evaluated for human-computer interaction. In the results, DETT stood out with the best architectural design. Users who played the version of the game with emotional agents performed better than those who played without agents. In the evaluation of the users' subjective experience, the differences between the architectures were insignificant.
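
A minimal sketch of how design-quality metrics might be aggregated into a ranking; every number and weight below is a hypothetical placeholder, not a measurement or result from the study, and the aggregation formula is an assumption for illustration.

```python
# Illustrative sketch only: aggregating design-quality metrics into comparable scores.
# Architecture names come from the study, but all values and weights are placeholders.
metrics = {                 # (extensibility, modularity, complexity) on a 0-10 scale
    "BDIE": (6, 7, 4),
    "DETT": (8, 8, 3),
    "EBDI": (7, 6, 5),
}
weights = (0.4, 0.4, 0.2)   # complexity is treated as a cost, so it is subtracted

def design_score(ext, mod, cpx):
    return weights[0] * ext + weights[1] * mod - weights[2] * cpx

ranking = sorted(metrics, key=lambda a: design_score(*metrics[a]), reverse=True)
for arch in ranking:
    print(arch, round(design_score(*metrics[arch]), 2))
```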

Relevance:

20.00%

Publisher:

Abstract:

Traceability between the models produced in the requirements and architecture activities is a strategy that aims to prevent loss of information, reducing the gap between these two initial activities of the software life cycle. In the context of Software Product Lines (SPL), it is important to have this support, which allows the correspondence between these two activities, with variability management. In order to address this issue, this work presents a bidirectional mapping process, defining transformation rules between the elements of a goal-oriented requirements model (described in PL-AOVgraph) and the elements of an architectural description (defined in PL-AspectualACME). These mapping rules are evaluated using a case study: the GingaForAll SPL. To automate the transformation, we developed the MaRiPLA tool (Mapping Requirements to Product Line Architecture) using MDD (Model-Driven Development) techniques, including the Atlas Transformation Language (ATL) with Ecore metamodel specifications, together with Xtext, a DSL definition framework, and Acceleo, a code generation tool, in the Eclipse environment. Finally, the generated models are evaluated based on quality attributes such as variability, derivability, reusability, correctness, traceability, completeness, evolvability, and maintainability, extracted from the CAFÉ Quality Model.
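
A loose sketch of rule-based model transformation in plain Python: the element kinds and rules are invented for illustration and do not correspond to the actual PL-AOVgraph-to-PL-AspectualACME rules or to the ATL implementation in MaRiPLA.

```python
# Illustrative sketch only: one transformation rule per source element kind, mapping a
# toy requirements model to a toy architectural model, with a printed trace link.
requirements_model = [
    {"kind": "goal", "name": "PlayMedia"},
    {"kind": "task", "name": "DecodeStream"},
    {"kind": "crosscutting", "name": "Logging"},
]

def to_architecture(element):
    rules = {
        "goal": lambda e: {"kind": "component", "name": e["name"]},
        "task": lambda e: {"kind": "connector", "name": e["name"] + "Conn"},
        "crosscutting": lambda e: {"kind": "aspectual_component", "name": e["name"]},
    }
    return rules[element["kind"]](element)

architecture_model = [to_architecture(e) for e in requirements_model]
for src, dst in zip(requirements_model, architecture_model):
    print(f'{src["kind"]}:{src["name"]} -> {dst["kind"]}:{dst["name"]}')  # trace link
```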

Relevance:

20.00%

Publisher:

Abstract:

In everyday life we constantly perform many actions, two of which are frequent and of great importance: classifying (sorting into classes) and making decisions. When we encounter problems with a relatively high degree of complexity, we tend to seek other opinions, usually from people who have some knowledge of, or are even experts in, the problem domain in question, in order to help us in the decision-making process. In both the classification process and the decision-making process, we are guided by the characteristics involved in the specific problem. The characterization of a set of objects is part of decision making in general. In Machine Learning, this classification is performed by a learning algorithm, and the characterization is applied to databases. Classification algorithms can be employed individually or in machine committees. Choosing the best methods to use in the construction of a committee is a very arduous task. In this work, meta-learning techniques are investigated for selecting the best configuration parameters of homogeneous committees for various classification problems. These parameters are: the base classifier, the architecture, and the size of this architecture. We investigated nine types of inducers as candidate base classifiers, two methods of architecture generation, and nine sizes for the architecture. Dimensionality reduction techniques were applied to the meta-databases in search of improvement. Five classification methods are investigated as meta-learners in the process of choosing the best parameters of a homogeneous committee.
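
A minimal sketch, assuming scikit-learn is available, of selecting a homogeneous committee's base classifier and size by cross-validation. The thesis's actual setting (nine inducers, two architecture-generation methods, nine sizes, and meta-learners trained on meta-databases) is only hinted at in the closing comment; the candidates below are assumptions for illustration.

```python
# Illustrative sketch only: picking a homogeneous committee configuration.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hypothetical candidate configurations: base inducer x committee size.
candidates = [
    (name, BaggingClassifier(base, n_estimators=size, random_state=0), size)
    for name, base in [("tree", DecisionTreeClassifier()), ("nb", GaussianNB())]
    for size in (5, 15, 25)
]

results = []
for name, committee, size in candidates:
    score = cross_val_score(committee, X, y, cv=5).mean()
    results.append((score, name, size))

best_score, best_base, best_size = max(results)
print(f"best homogeneous committee: base={best_base}, size={best_size}, acc={best_score:.3f}")

# A meta-learner would instead predict (best_base, best_size) from dataset
# meta-features (e.g. numbers of instances, attributes, classes) learned
# across many datasets, avoiding the exhaustive search above.
```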

Relevance:

20.00%

Publisher:

Abstract:

The increasing complexity of integrated circuits has boosted the development of communication architectures such as Networks-on-Chip (NoCs) as an architectural alternative for the interconnection of Systems-on-Chip (SoCs). Networks-on-Chip favor component reuse, parallelism, and scalability, enhancing reusability in projects for dedicated applications. Many proposals in the literature suggest different configurations for network-on-chip architectures. Among them, the IPNoSys architecture is a non-conventional one, since it allows operations to be executed while the communication process is performed. This study aims to evaluate the execution of data-flow-based applications on IPNoSys, focusing on their adaptation to the design constraints. Data-flow-based applications are characterized by a continuous stream of data on which operations are executed. We expect these applications to perform well on IPNoSys, because their programming model is similar to the execution model of this network. By observing the behavior of these applications when running on IPNoSys, changes were made to the execution model of the IPNoSys network, allowing the implementation of instruction-level parallelism. For these purposes, implementations of data-flow applications were analyzed and compared.
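
A toy sketch of the "compute while routing" idea: a packet carries operands and a queue of operations that routers apply as the packet travels. The topology, operations, and packet layout are hypothetical and are not the IPNoSys packet format.

```python
# Illustrative sketch only: a packet whose pending operations are consumed hop by hop
# along its route, mimicking computation during communication.
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def route_and_execute(packet, path):
    """At each router on the path, consume one pending operation if any remain."""
    for router in path:
        if packet["pending"]:
            op, operand = packet["pending"].pop(0)
            packet["value"] = OPS[op](packet["value"], operand)
            print(f"router {router}: applied {op} {operand} -> {packet['value']}")
        else:
            print(f"router {router}: forward only")
    return packet["value"]

packet = {"value": 3, "pending": [("add", 4), ("mul", 2)]}
print("result:", route_and_execute(packet, path=["(0,0)", "(0,1)", "(1,1)"]))
```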

Relevance:

20.00%

Publisher:

Abstract:

This work discusses the changes observed in the teaching of Histology, how technology has been employed in learning contexts, and the pedagogical aspects inherent in the use of resources such as digital atlases and virtual microscopes, and presents research on the development of a virtual teaching-learning environment for Histology, built with the participation of students and teachers. It was found that virtual environments and other didactic resources based on Information and Communication Technologies (ICTs) seek to meet the current trend of complementing face-to-face education with distance-education tools, which can optionally be used for continued study outside the classroom. It was concluded that, although the new technologies can contribute to the teaching of Histology, ICT-based didactic materials must meet teachers' and students' expectations as well as pedagogical and ergonomic requirements, and need to be adopted by teachers not as isolated tools but integrated into teaching-learning strategies.

Relevance:

20.00%

Publisher:

Abstract:

This research presents a reflection on dropout rates in higher education courses offered through distance education, in a context marked by deep changes in various spheres of society and by the uncontrolled growth of this educational modality. The object of analysis is the reality of students of the Technology in Environmental Management course offered by the Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte (RN), Brazil, through distance education, with in-person support poles located in Mossoró (RN) and Martins (RN). In this field research, several data-collection strategies were used, such as participant observation of the setting; analysis of documents related to distance education and to the Technology in Environmental Management classes; and questionnaires answered by students who dropped out of the course. The results allow us to say that dropout in distance higher education stems mainly from a combination of aspects of the course's development, personal difficulties faced by students during the period in which they attend classes, and elements inherent in the context in which the course and the students are situated. Nevertheless, there are specific situations in which a student may drop out of the program due to the influence of a single aspect, whether inherent in the development of the course, a personal situation, or a factor determined by the context to which the course or the student belongs.