Abstract:
Committees of classifiers may be used to improve the accuracy of classification systems; in other words, different classifiers applied to the same problem can be combined to create a more accurate system, called a committee of classifiers. For this to succeed, the classifiers must make mistakes on different objects of the problem, so that the errors of one classifier are compensated by the correct outputs of the others when the committee's combination method is applied. This characteristic of classifiers erring on different objects is called diversity. However, most diversity measures fail to capture this property. Recently, two new diversity measures (good diversity and bad diversity) were proposed with the aim of helping to generate more accurate committees. This work performs an experimental analysis of these measures applied directly to the construction of committees of classifiers. The construction method adopted is modeled as a search, over the feature sets of the problem databases and the candidate committee members, for the committee of classifiers that produces the most accurate classification. This problem is solved by metaheuristic optimization techniques, in their mono-objective and multi-objective versions. Analyses are performed to verify whether using or adding the good and bad diversity measures as optimization objectives creates more accurate committees. Thus, the contribution of this study is to determine whether good diversity and bad diversity can be used in mono-objective and multi-objective optimization techniques as objectives for building committees of classifiers that are more accurate than those built by the same process using only classification accuracy as the optimization objective.
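Concretely, the two measures arise from a decomposition of the majority-vote error into the average individual error, minus a "good" term (disagreement on examples the committee gets right) and plus a "bad" term (disagreement on examples it gets wrong). A minimal Java sketch, assuming a boolean correctness matrix for the committee (an illustration of the measures, not the thesis's implementation):

```java
/**
 * Sketch of the good/bad diversity measures for a majority-vote
 * committee, following the decomposition
 *   e_maj = e_avg - goodDiversity + badDiversity.
 * correct[i][n] is true when classifier i labels example n correctly.
 */
public final class GoodBadDiversity {

    public static double[] compute(boolean[][] correct) {
        int L = correct.length;        // number of committee members
        int N = correct[0].length;     // number of examples
        double good = 0.0, bad = 0.0;
        for (int n = 0; n < N; n++) {
            int c = 0;                 // members correct on example n
            for (int i = 0; i < L; i++) if (correct[i][n]) c++;
            if (2 * c > L) {
                // majority correct: disagreeing votes are "good" diversity
                good += (double) (L - c) / L;
            } else {
                // majority wrong (ties counted as errors here): the
                // correct minority votes are "bad" diversity
                bad += (double) c / L;
            }
        }
        return new double[] { good / N, bad / N };
    }
}
```

In the mono-objective setting these quantities can be folded into a single fitness; in the multi-objective setting they become separate objectives alongside accuracy (maximizing good diversity, minimizing bad diversity).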
Abstract:
The academic community and the software industry have shown, in recent years, substantial interest in approaches and technologies related to model-driven development (MDD). At the same time, industry continues its relentless pursuit of technologies that raise productivity and quality in the development of software products. This work explores these two observations through an experiment carried out using MDD technology and an evaluation of its use in solving an actual problem in the security context of enterprise systems. By building and using a tool, a visual DSL named CALV3, inspired by the software factory approach (a synergy between software product lines, domain-specific languages and MDD), we evaluate the gains in abstraction and productivity through a systematic case study conducted with a development team. The results and lessons learned from the evaluation of this tool within industry are the main contributions of this work.
Abstract:
With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all application requirements. To fulfill such requirements, it may be necessary to use, instead of a single service, a composition of services that aggregates services provided by different cloud platforms. In order to generate added value for the user, this composition of services provided by several Cloud Computing platforms requires a solution for platform integration, which encompasses the manipulation of a large number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services that take into account metadata about the services, such as QoS (Quality of Service) and prices. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. In this work, through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of the performed service composition, selection and adaptation processes, as well as the potential of using this middleware in heterogeneous computational cloud scenarios.
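The abstract does not detail how the QoS metadata drives selection; one common realization is a weighted additive utility over normalized metrics that ranks functionally equivalent candidates. The Java sketch below assumes hypothetical metrics and weights, not Cloud Integrator's actual selection model:

```java
import java.util.Comparator;
import java.util.List;

/**
 * Sketch of QoS-aware service selection via a weighted utility over
 * normalized metadata. The Candidate record and the weights are
 * illustrative assumptions only.
 */
public final class ServiceSelector {

    /** Candidate service with QoS metrics normalized to [0, 1]. */
    public record Candidate(String id, double availability,
                            double responseTime, double price) {}

    /** Higher is better: reward availability, penalize latency and price. */
    static double utility(Candidate c) {
        return 0.5 * c.availability()
             - 0.3 * c.responseTime()
             - 0.2 * c.price();
    }

    public static Candidate select(List<Candidate> candidates) {
        return candidates.stream()
                .max(Comparator.comparingDouble(ServiceSelector::utility))
                .orElseThrow(() -> new IllegalArgumentException("no candidates"));
    }
}
```

An adaptation mechanism such as the one described can then be seen as re-running this selection when a monitored metric of the currently bound service degrades.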
Abstract:
The increasing complexity of integrated circuits has boosted the development of communication architectures such as Networks-on-Chip (NoCs) as an architectural alternative for the interconnection of Systems-on-Chip (SoCs). Networks-on-Chip favor component reuse, parallelism and scalability, enhancing reusability in projects of dedicated applications. In the literature, many proposals have been made, suggesting different configurations for network-on-chip architectures. Among the networks-on-chip considered, the IPNoSys architecture is a non-conventional one, since it allows the execution of operations while the communication process is performed. This study aims to evaluate the execution of dataflow-based applications on IPNoSys, focusing on their adaptation to the design constraints. Dataflow-based applications are characterized by a continuous stream of data on which operations are executed. We expect that this type of application can be improved when running on IPNoSys, because its programming model is similar to the execution model of this network. By observing the behavior of these applications when running on IPNoSys, changes were made to the execution model of the IPNoSys network, allowing the implementation of instruction-level parallelism. For these purposes, implementations of dataflow applications were analyzed and compared.
Abstract:
The field of Wireless Sensor and Actuator Networks (WSAN) is growing fast and has attracted the interest of both the research community and industry because of several factors, such as the applicability of such networks in different application domains (aviation, civil engineering, medicine, and others). Moreover, advances in wireless communication and the reduction in the size of hardware components have also contributed to the fast spread of these networks. However, there are still several challenges and open issues that need to be tackled in order to achieve the full potential of WSAN usage. The development of WSAN systems is one of the most relevant of these challenges, considering the number of variables involved in this process. Currently, a broad range of WSAN platforms and low-level programming languages are available to build WSAN systems. Thus, developers need to deal with details of different sensor platforms and low-level programming abstractions of sensor operating systems on one hand, and they also need to have specific (high-level) knowledge about the distinct application domains on the other. Therefore, in order to decouple the handling of these two different levels of knowledge and ease the development process of WSAN systems, we propose LWiSSy (Domain Language for Wireless Sensor and Actuator Networks Systems), a domain-specific language (DSL) for WSAN. The use of DSLs raises the abstraction level in systems programming and modularizes the system construction into several steps. Thus, LWiSSy allows domain experts to contribute directly to the development of WSANs without having knowledge of low-level sensor platforms, and network experts to program sensor nodes to meet application requirements without having specific knowledge of the application domain. Additionally, LWiSSy enables system decomposition into different levels of abstraction according to structural and behavioral features and granularities (network, node group and single node level programming).
Abstract:
The visualization of three-dimensional (3D) images is increasingly being used in the area of medicine, helping physicians diagnose diseases. The advances achieved in the scanners used for the acquisition of these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images with higher resolutions, thus generating files with much larger sizes. Currently, the rendering of these images is computationally expensive, demanding a high-end computer for such a task. Direct remote access to these images through the Internet is not efficient either, since all images have to be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyzes a solution for the remote rendering of 3D medical images, called Remote Rendering 3D (RR3D). In RR3D, the whole rendering process is performed on a server, or a cluster of servers, with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotations, zoom, etc. The solution was developed using web services written in Java and an architecture that uses the scientific visualization package ParaView, the ParaViewWeb framework and the PACS server DCM4CHEE. The solution was tested with two scenarios in which the rendering process was performed by a server with graphics hardware (GPUs) and by a server without GPUs. In the scenario without GPUs, the solution was executed in parallel with varying numbers of cores (processing units) dedicated to it. In order to compare our solution to other medical visualization applications, a third scenario was used in which the rendering process was done locally. In all three scenarios, the solution was tested for different network speeds. The solution satisfactorily solved the problem of the delay in the transfer of DICOM files, while allowing the use of low-end computers, and even tablets and smartphones, as clients for visualizing the exams.
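The abstract names the building blocks (Java web services, ParaView/ParaViewWeb, DCM4CHEE) but not their interfaces. Purely to illustrate the interaction shape, the sketch below shows a hypothetical JAX-RS endpoint that receives camera parameters and returns the server-rendered frame as a PNG; the renderFrame call stands in for the ParaView integration, which is not shown:

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.QueryParam;
import jakarta.ws.rs.core.Response;

/**
 * Hypothetical remote-rendering endpoint: the client never downloads
 * the DICOM volume, only the frame rendered on the server.
 */
@Path("/exams/{examId}/frame")
public class FrameResource {

    @GET
    public Response frame(@PathParam("examId") String examId,
                          @QueryParam("azimuth") double azimuth,
                          @QueryParam("elevation") double elevation,
                          @QueryParam("zoom") double zoom) {
        // Placeholder for the server-side pipeline (e.g., ParaView
        // driven through its scripting interface); returns a PNG.
        byte[] png = renderFrame(examId, azimuth, elevation, zoom);
        return Response.ok(png, "image/png").build();
    }

    private byte[] renderFrame(String examId, double az, double el, double zoom) {
        throw new UnsupportedOperationException("rendering backend not shown");
    }
}
```

Under this shape, a rotation or zoom on a tablet costs one small HTTP round trip per frame instead of a full volume transfer.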
Abstract:
In this work, we present a tool whose purpose is to assist non-programmers, video game players, in creating extensions in the form of Add-ons for the online game World of Warcraft. In it, the user can create extensions that fully customize the game interface, so as to reinvent the game experience and improve playability. The creation of extensions for applications and games arose from the growing need to provide users with efficient End-User Programming mechanisms, allowing them to fulfill their unique needs through the creation, customization and specification of software extensions. In World of Warcraft specifically, Add-ons explore a type of extension in which players program their own user interface or make use of interfaces created by other users. However, programming these extensions (the Add-ons) is not an easy task. Within this context, we developed EUPAT for WoW (End-User Programming Assistance Tool for World of Warcraft), a tool that offers assistance for the creation of Add-ons. In addition, we investigated how player users with and without programming knowledge benefit from it. The results of this research allowed us to reflect on strategies for end-user programming assistance in the context of games.
Abstract:
The use of multi-agent systems for classification tasks has been proposed in order to overcome some drawbacks of multi-classifier systems and, as a consequence, to improve the performance of such systems. As a result, the NeurAge system was proposed. This system is composed of several neural agents which communicate and negotiate a common result for the testing patterns. In the NeurAge system, the negotiation method is very important to the overall performance of the system, since the agents need to reach an agreement about a problem when there is a conflict among them. This thesis presents an extensive analysis of the NeurAge system in which all kinds of classifiers are used; this system is now named the ClassAge system. The aim is to analyze the reaction of this system to modifications in its topology and configuration.
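The abstract does not specify the negotiation protocol. As a purely illustrative stand-in, the sketch below resolves a conflict by letting each agent report a per-class confidence vector and agreeing on the class with the highest summed confidence; the actual NeurAge/ClassAge negotiation methods may differ substantially:

```java
import java.util.List;

/**
 * Illustrative conflict resolution among classifier agents: each agent
 * exposes per-class confidences and the committee agrees on the class
 * with the largest total confidence. Not the real NeurAge protocol.
 */
public final class Negotiation {

    public interface Agent {
        double[] confidences(double[] pattern); // one entry per class
    }

    public static int agree(List<Agent> agents, double[] pattern, int numClasses) {
        double[] total = new double[numClasses];
        for (Agent a : agents) {
            double[] conf = a.confidences(pattern);
            for (int c = 0; c < numClasses; c++) total[c] += conf[c];
        }
        int best = 0;
        for (int c = 1; c < numClasses; c++) if (total[c] > total[best]) best = c;
        return best;
    }
}
```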
Abstract:
Reconfigurable Computing is an intermediate solution for the resolution of complex problems, making it possible to combine the speed of hardware with the flexibility of software. A reconfigurable architecture has several goals, among them the increase of performance. The use of reconfigurable architectures to increase the performance of systems is a well-known technique, especially because of the possibility of implementing directly in hardware certain algorithms that are slow on current processors. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as the formats, operands and operations of the instructions. The main objective of this work is the development of a RISP processor, combining the techniques of configuring the processor's set of executed instructions during development and reconfiguring it at execution time. The design and implementation in VHDL of this RISP processor intend to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at a given time; and the possibility of creating and combining new instructions, so that the processor recognizes and uses them in real time as if they existed in the fixed instruction set. The creation and combination of instructions is done through a reconfiguration unit incorporated into the processor. This unit allows the user to send custom instructions to the processor, so that they can later be used as if they were fixed instructions of the processor. This work also presents simulations of applications involving fixed and custom instructions, and results of comparisons between these applications with respect to power consumption and execution time, which confirm the attainment of the goals for which the processor was developed.
Abstract:
New programming language paradigms have commonly been tested and eventually incorporated into hardware description languages. Recently, aspect-oriented programming (AOP) has proven successful in improving the modularity of object-oriented and structured languages such as Java, C++ and C. Thus, one can expect that, using AOP, one can improve the understanding of hardware systems under design, as well as make their components more reusable and easier to maintain. We apply AOP to applications developed using the SystemC library. Several examples are presented illustrating how to combine AOP and SystemC. During the presentation of these examples, the benefits of this new approach are also discussed.
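The thesis's SystemC examples are not reproduced in the abstract. As a hedged illustration of the AOP mechanics it relies on (factoring a crosscutting concern out of the components), the annotation-style AspectJ sketch below traces every call to a hypothetical Channel.write method without touching the channel code; in the thesis's setting the same idea would be expressed over SystemC/C++ constructs:

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

/**
 * Annotation-style AspectJ sketch: tracing lives in one aspect instead
 * of being scattered through every component. Channel.write is a
 * hypothetical method used only for illustration.
 */
@Aspect
public class TracingAspect {

    @Before("call(* Channel.write(..))")
    public void traceWrite(JoinPoint jp) {
        System.out.println("write intercepted with "
                + jp.getArgs().length + " argument(s)");
    }
}
```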
Abstract:
This work presents tVoice, a software system that manipulates tag languages to extract information and, as an integral part of the VoiceProxy system, aids users with special needs in accessing the Web. This subsystem is responsible for searching and processing documents on the Web, extracting the textual information contained in those documents and providing the capability of eventually generating, through translation techniques, an audio script, which is used by the interface subsystem of VoiceProxy, iVoice, in the voice synthesis process. At this stage, tVoice, besides handling the tag language HTML, processes two other document formats, PDF and XHTML. Additionally, so that other interface subsystems besides iVoice can make use of tVoice through remote access, we propose, using distributed systems techniques based on the client-server model, to provide the document processing operations in the fashion of a proxy server.
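The abstract does not name the parsing libraries tVoice uses. Purely as an illustration of tag-language text extraction in Java, the sketch below uses the jsoup library (an assumption, not the thesis's toolchain) to strip HTML markup down to the text from which an audio script would be generated:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

/**
 * Illustrative HTML text extraction with jsoup (assumed library, not
 * necessarily what tVoice uses): fetch a page and keep only the text
 * content, the raw material for an audio script.
 */
public final class TextExtractor {

    public static String extract(String url) throws java.io.IOException {
        Document doc = Jsoup.connect(url).get(); // download and parse
        return doc.body().text();                // markup stripped away
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extract("https://example.com"));
    }
}
```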
Abstract:
This work addresses the optimization problem in high-dose-rate brachytherapy for the treatment of cancer patients, aiming at defining the set of dwell times. The solution technique adopted was Computational Transgenetics supported by the L-BFGS method. The developed algorithm was employed to generate non-dominated solutions whose dose distributions were capable of eliminating the cancer while preserving the normal regions.
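The abstract does not state the objective function. A standard dwell-time formulation (an assumption here, not necessarily the thesis's exact model) makes the dose linear in the dwell times and penalizes deviations from prescribed bounds, a form smooth enough for quasi-Newton methods such as L-BFGS:

```latex
% Dose at calculation point i, linear in the dwell times t_j >= 0,
% with k_{ij} the dose-rate kernel from dwell position j to point i,
% and [x]_+ = \max(x, 0):
\[
  d_i(t) = \sum_{j} k_{ij}\, t_j, \qquad
  \min_{t \ge 0}\;
  \sum_{i \in \mathcal{T}} \bigl[D^{\min} - d_i(t)\bigr]_+^{2}
  \;+\;
  \sum_{i \in \mathcal{N}} \bigl[d_i(t) - D^{\max}\bigr]_+^{2}
\]
```

Here $\mathcal{T}$ are tumor points that must receive at least $D^{\min}$ and $\mathcal{N}$ are normal-tissue points that must stay below $D^{\max}$; trading these two sums off against each other is what produces the non-dominated solutions mentioned above.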
Abstract:
The target domain of this work is distributed collaborative systems, where the focus is on the exchange of messages among remotely distributed users. In these systems, messages need to carry multimedia content and to be deliverable both to a specific user and to a group or groups of users. The objective of this work is to develop a framework that eases the construction of this type of system and reduces development time through the technique of reuse. This work presents the N2N Framework, a platform for the development of distributed collaborative systems. The framework was conceived through the analysis of the behavior of applications with collaborative multimedia characteristics, such as multi-user virtual environments, chats, polls, and virtual cheering crowds. The framework was implemented using the Java platform. The N2N Framework eases the design and implementation of distributed collaborative systems by implementing message delivery, directing the application developer's attention to the implementation of their specific messages and the processing that follows from them.
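The abstract describes the framework's contract (multimedia messages delivered to a user or to groups) without showing its API. The interface below is a hypothetical sketch of that contract; all names are invented for illustration and are not the framework's real API:

```java
import java.util.Set;

/**
 * Hypothetical sketch of an N2N-style messaging contract: the framework
 * owns delivery, the application only defines its message payloads.
 */
public interface MessageService {

    /** Application-defined payload; bytes may carry text, audio, video... */
    record Message(String senderId, String mimeType, byte[] content) {}

    void sendToUser(String userId, Message message);

    void sendToGroup(String groupId, Message message);

    void sendToGroups(Set<String> groupIds, Message message);
}
```

Separating delivery (framework) from payload semantics (application) is exactly the reuse boundary the abstract describes.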
An approach for verifying exceptional behavior based on design rules and tests
Abstract:
Checking conformance between implementation and design rules in a system is an important activity for ensuring that no degradation occurs between the architectural patterns defined for the system and what is actually implemented in the source code. Especially for systems that require a high level of reliability, it is important to define specific design rules for exceptional behavior. Such rules describe how exceptions should flow through the system, defining which elements are responsible for catching the exceptions thrown by other system elements. However, current approaches to automatically checking design rules do not provide suitable mechanisms to define and verify design rules related to the exception handling policy of applications. This work proposes a practical approach to preserve the exceptional behavior of an application or family of applications, based on the definition and automated runtime checking of design rules for the exception handling of systems developed in Java or AspectJ. To support this approach, a tool called VITTAE (Verification and Information Tool to Analyze Exceptions) was developed in the context of this work; it extends the JUnit framework and automates test activities for exceptional design rules. We conducted a case study with the primary objective of evaluating the effectiveness of the proposed approach on a software product line. Besides this, an experiment was conducted to perform a comparative analysis between the proposed approach and an approach based on a tool called JUnitE, which also proposes to test exception handling code using JUnit tests. The results showed how exception handling design rules evolve across different versions of a system and that VITTAE can aid in the detection of defects in exception handling code.
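VITTAE's rule syntax is not given in the abstract. As a plain-JUnit illustration of the kind of exceptional design rule it automates ("failures in the persistence layer must surface as RepositoryException, never as the raw low-level exception"), the sketch below encodes one such rule as an ordinary test; all class names are invented:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

/**
 * One exceptional design rule encoded as a plain JUnit test (VITTAE
 * automates richer rules on top of JUnit; names here are hypothetical).
 */
class ExceptionFlowRuleTest {

    static class RepositoryException extends RuntimeException {
        RepositoryException(Throwable cause) { super(cause); }
    }

    /** Toy facade obeying the rule: it wraps whatever the driver throws. */
    static class UserRepository {
        Object findById(long id) {
            try {
                throw new java.sql.SQLException("connection lost"); // simulated failure
            } catch (java.sql.SQLException e) {
                throw new RepositoryException(e);
            }
        }
    }

    @Test
    void persistenceFailuresAreWrapped() {
        assertThrows(RepositoryException.class,
                () -> new UserRepository().findById(42L));
    }
}
```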
Abstract:
The activity of requirements engineering is seen in agile methods as a bureaucratic activity that makes the process less agile. However, the lack of documentation in agile development environments is identified as one of the main challenges of the methodology. Thus, there is a contradiction between what the agile methodology claims and the results observed in real environments. For example, in agile methods user stories are widely used to describe requirements. However, this way of describing requirements is still not enough, because user stories are too limited an artifact to represent and detail requirements. Activities such as verifying the software context and the dependencies between stories are also limited when only this artifact is used. In the context of requirements engineering, there are goal-oriented approaches that bring benefits to requirements documentation, including completeness of requirements, analysis of alternatives and support for the rationalization of requirements. Among these approaches, the i* modeling technique stands out, providing a graphical view of the actors involved in the system and their dependencies. This work proposes an additional resource that aims to reduce this lack of documentation in agile methods. Therefore, the objective of this work is to provide a graphical view of software requirements and their relationships through i* models, thus enriching the requirements in agile methods. To this end, we propose a set of heuristics to map requirements expressed as user stories into i* models. These models can then be used as a form of documentation in the agile environment, because by mapping to i* models, the requirements are viewed more broadly and with their proper relationships, according to the business environment they will serve.
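The concrete heuristics are not listed in the abstract. The sketch below illustrates the general shape of one plausible mapping rule (an assumption, not the thesis's actual heuristics): the "As a <role>" clause of a story becomes an i* actor and the "I want <capability>" clause a goal dependency of that actor:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Illustrative mapping of a user story onto i* elements; the regex and
 * the one-rule heuristic are assumptions made for this example.
 */
public final class StoryToIStar {

    private static final Pattern STORY = Pattern.compile(
            "As an? (?<role>.+?), I want (?<goal>.+?)(?: so that (?<reason>.+?))?\\.?$",
            Pattern.CASE_INSENSITIVE);

    /** Minimal i* fragment: an actor depending on a goal, with rationale. */
    public record Mapping(String actor, String goal, String rationale) {}

    public static Mapping map(String story) {
        Matcher m = STORY.matcher(story.trim());
        if (!m.matches()) throw new IllegalArgumentException("not a user story: " + story);
        return new Mapping(m.group("role"), m.group("goal"),
                           m.group("reason") == null ? "" : m.group("reason"));
    }

    public static void main(String[] args) {
        System.out.println(map(
            "As a librarian, I want to renew loans so that readers keep books longer."));
    }
}
```

A full heuristic set would additionally connect stories to one another (dependencies between goals) and to the softgoals implied by the "so that" rationale.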