926 results for CNPQ::CIENCIAS EXATAS E DA TERRA::PROBABILIDADE E ESTATISTICA::ESTATISTICA


Relevance: 100.00%

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems of transport and distribution network design; in general, AQMST is the more suitable model for real problems. Although linear and quadratic costs are added together in the literature, in real applications they may be conflicting, in which case it is interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: QMST and AQMST. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search (PLS), Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two called NSTA. The proposed algorithms are compared to each other through performance analysis on computational experiments with instances adapted from the QMST literature. For the exact algorithms, the analysis considers, in particular, execution time. For the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated using quality indicators, and appropriate statistical tools are used to measure the performance of both exact and heuristic algorithms. Considering the set of instances adopted and the criteria of execution time and approximation-set quality, the experiments showed that the Tabu Search with ejection chains obtained the best results and the Transgenetic Algorithm ranked second. The PLS algorithm obtained good-quality solutions, but at a very high computational cost compared to the other (meta)heuristics, taking third place. The NSTA and NSGA-II algorithms took the last positions.
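
A minimal sketch of the cost structure that distinguishes AQMST from the plain MST, assuming edges come as endpoint pairs with linear costs and q[i][j] holds the interaction cost of the i-th and j-th edges of the list; in bi-AQST the two sums below would be kept as separate objectives instead of being added.

```java
import java.util.List;

public class AqmstCost {
    /** An edge of the spanning tree: endpoints u, v and linear cost c. */
    record Edge(int u, int v, double c) {}

    static double totalCost(List<Edge> tree, double[][] q) {
        double cost = 0.0;
        for (Edge e : tree) cost += e.c();              // linear part
        for (int i = 0; i < tree.size(); i++)
            for (int j = i + 1; j < tree.size(); j++) {
                Edge e = tree.get(i), f = tree.get(j);
                boolean adjacent = e.u() == f.u() || e.u() == f.v()
                                || e.v() == f.u() || e.v() == f.v();
                if (adjacent) cost += q[i][j];          // quadratic part (adjacent only)
            }
        return cost;
    }
}
```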

Relevance: 100.00%

Abstract:

This thesis proposes the architecture of a new multiagent system framework for the hybridization of metaheuristics, inspired by the general Particle Swarm Optimization (PSO) framework. The main contribution is an effective approach to solve hard combinatorial optimization problems. PSO was chosen as inspiration because it is inherently multiagent, which allows exploiting features of multiagent systems such as learning and cooperation techniques. In the proposed architecture, particles are autonomous agents with memory and with methods for learning and decision making, using search strategies to move through the solution space. The concepts of position and velocity originally defined in PSO are redefined for this approach. The proposed architecture was applied to the Traveling Salesman Problem and to the Quadratic Assignment Problem, and computational experiments were performed to test its effectiveness. The experimental results were promising, with satisfactory performance, even though the potential of the proposed architecture has not been fully explored. In future research, the proposed approach will also be applied to multiobjective combinatorial optimization problems, which are closer to real-world problems. In the context of applied research, we intend to work with students at both the undergraduate and technical levels on applying the proposed architecture to real-world problems.
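
A minimal sketch of the redefinition described above, assuming a particle's position is a candidate solution (e.g., a TSP tour) and its velocity is a pluggable search strategy that decides the next move; all names are illustrative, not the framework's actual API.

```java
import java.util.function.ToDoubleFunction;

interface SearchStrategy {
    int[] move(int[] current, int[] personalBest, int[] globalBest);
}

class ParticleAgent {
    int[] position;           // e.g. a TSP tour (permutation of cities)
    int[] personalBest;       // the agent's memory of its best solution
    SearchStrategy velocity;  // replaces the numeric velocity of classic PSO

    ParticleAgent(int[] start, SearchStrategy strategy) {
        position = start.clone();
        personalBest = start.clone();
        velocity = strategy;
    }

    void step(int[] globalBest, ToDoubleFunction<int[]> cost) {
        position = velocity.move(position, personalBest, globalBest);
        if (cost.applyAsDouble(position) < cost.applyAsDouble(personalBest))
            personalBest = position.clone();  // learning from its own experience
    }
}
```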

Relevance: 100.00%

Abstract:

Nowadays, the importance of using software processes is consolidated, and they are considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge in the software industry: it involves adapting software process models to the reality of each project, and it must also promote the reuse of past experience when defining and developing software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the software systems produced. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review is conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines is proposed and developed; and finally (iii) empirical studies and a controlled experiment assess and compare the proposed annotative approach against a compositional one. The first study, a comparative qualitative study, analyzed the annotative and compositional approaches from different perspectives: modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. The second, a comparative quantitative study, considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, a controlled experiment evaluated the usage effort and understandability of the investigated approaches when modeling and evolving software process line specifications. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, in assisting the variability management of software process lines.
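
To make the contrast concrete: in an annotative approach the whole process family lives in one model and every element carries a presence condition, whereas a compositional approach assembles variants from separate fragments. A minimal sketch of the annotative idea, with illustrative names only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class ProcessElement {
    final String name;
    final Set<String> presenceCondition;  // features required for inclusion
    ProcessElement(String name, String... features) {
        this.name = name;
        this.presenceCondition = Set.of(features);
    }
}

class ProcessLine {
    final List<ProcessElement> elements = new ArrayList<>();

    /** Derives a concrete process: keep elements whose annotation holds. */
    List<String> derive(Set<String> selectedFeatures) {
        List<String> variant = new ArrayList<>();
        for (ProcessElement e : elements)
            if (selectedFeatures.containsAll(e.presenceCondition))
                variant.add(e.name);
        return variant;
    }
}
```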

Relevance: 100.00%

Abstract:

Camera motion estimation is one of the fundamental problems in Computer Vision, and it may be solved by several methods. Preemptive RANSAC is one of them; despite its robustness and speed, it lacks flexibility with respect to the requirements of the applications and hardware platforms that use it. In this work, we propose an improvement to the structure of Preemptive RANSAC to overcome such limitations and make it feasible to execute on devices with heterogeneous resources (especially low-budget systems) under tighter time and accuracy constraints. We derived from Preemptive RANSAC a function called BRUMA, which generalizes several preemption schemes and allows previously fixed parameters (block size and elimination factor) to be changed according to the application constraints. We also propose the Generalized Preemptive RANSAC method, which makes it possible to bound the maximum number of hypotheses the algorithm may generate. The experiments performed show the superiority of our method in the expected scenarios. Moreover, additional experiments show that multimethod hypothesis generation achieved results that are more robust to variability in the set of evaluated motion directions.
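
For context, classic Preemptive RANSAC scores hypotheses in blocks of B observations and halves the surviving set after each block, i.e. f(i) = floor(M * 2^(-floor(i/B))) hypotheses remain after i observations. Below is a minimal sketch of a parameterized schedule in the spirit of BRUMA, assuming the generalization simply exposes the block size and the elimination factor (the abstract does not give BRUMA's exact formulation).

```java
public class PreemptionSchedule {
    /**
     * Hypotheses still alive after scoring i observations, starting from m.
     * Classic Preemptive RANSAC corresponds to f = 2 and a fixed block b.
     */
    static int alive(int i, int m, int b, double f) {
        return (int) Math.floor(m / Math.pow(f, i / b));
    }
}
```

Shrinking b or raising f eliminates hypotheses faster, which is exactly the knob a low-budget platform under a tight time budget needs; f = 2 recovers the usual halving schedule.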

Relevance: 100.00%

Abstract:

Context-aware applications are typically dynamic and use services provided by several sources, with different quality levels. The quality of context information is expressed in terms of Quality of Context (QoC) metadata, such as precision, correctness, refreshment, and resolution. Service quality, in turn, is expressed via Quality of Service (QoS) metadata such as response time, availability and error rate. To assure that an application is using services and context information that meet its requirements, it is essential to continuously monitor this metadata. For this purpose, a QoS and QoC monitoring mechanism is needed that meets the following requirements: (i) support for measuring and monitoring QoS and QoC metadata; (ii) support for synchronous and asynchronous operation, enabling the application both to periodically gather the monitored metadata and to be asynchronously notified whenever a given metadata item becomes available; (iii) use of ontologies to represent information, in order to avoid ambiguous interpretation. This work presents QoMonitor, a module for QoS and QoC metadata monitoring that meets the requirements above. The architecture and implementation of QoMonitor are discussed. To support asynchronous communication, QoMonitor uses two protocols: JMS and Light-PubSubHubbub. To illustrate QoMonitor in the development of ubiquitous applications, it was integrated into OpenCOPI (Open COntext Platform Integration), a middleware platform that integrates several context provision middleware systems. To validate QoMonitor we used two applications as proofs of concept: an oil and gas monitoring application and a healthcare application. This work also validates QoMonitor in terms of performance, for both synchronous and asynchronous requests.
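
A minimal sketch of requirement (ii), the dual synchronous/asynchronous contract; the interface and method names are illustrative, not the actual QoMonitor API.

```java
import java.util.function.Consumer;

interface MetadataMonitor {
    /** Synchronous: the application periodically polls the latest value. */
    double read(String provider, String metric);  // e.g. "responseTime"

    /** Asynchronous: the application is notified when the metric changes. */
    void subscribe(String provider, String metric, Consumer<Double> callback);
}
```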

Relevance: 100.00%

Abstract:

As hardware and software technologies advance, the development models of computational systems also change. New methodologies for user interface specification are being created, along with user interface description languages (UIDLs). A UIDL provides a precise description at a higher level of abstraction, independent of how the interface will be implemented. A great problem is that, even with these methodologies, there is still a large distance between a UIDL and its design, that is, between the abstract and the concrete. The BRIDGE tool (Interface Design Generator Environment) was created to be a bridge between a specification language (the Interactive Message Modeling Language, IMML) and its implementation in Java, linking the abstract (specification) to the concrete (implementation). IMML is a model-based language that allows the designer to work at distinct abstraction levels, each model being a distinct abstraction level. IMML is an XML language built on the concepts of Semiotic Engineering, which treats the computational system, its user interface and their elements as a metacommunication artifact: these elements must convey a message to the user about which task is to be performed and how to reach that goal. With BRIDGE, we intend to provide broad support for the design task, the greatest part of which is user interface prototyping. BRIDGE makes design easier and more intuitive, starting from an interface specification language.
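
A minimal sketch of the abstract-to-concrete step BRIDGE automates, assuming an IMML command element is rendered as a Swing button; the element model here is illustrative, not the real IMML schema.

```java
import javax.swing.JButton;
import javax.swing.JComponent;

class ImmlCommand {
    final String label;   // the message the interface conveys to the user
    final Runnable task;  // the domain task the command triggers
    ImmlCommand(String label, Runnable task) {
        this.label = label;
        this.task = task;
    }
}

class Bridge {
    /** Generates a concrete Java widget from the abstract specification. */
    static JComponent render(ImmlCommand command) {
        JButton button = new JButton(command.label);
        button.addActionListener(e -> command.task.run());
        return button;
    }
}
```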

Relevance: 100.00%

Abstract:

Formal methods should be used to specify and verify on-card software in Java Card applications. Furthermore, the Java Card programming style requires runtime verification of all input conditions of all on-card methods, the main goal being to preserve the data on the card. Design by Contract, and in particular the JML language, is an option for this kind of development and verification, since runtime verification is part of the Design by Contract method as implemented by JML. However, JML and its currently available tools for runtime verification were not designed with Java Card limitations in mind and are not Java Card compliant. In this thesis, we analyze how much of this situation is really intrinsic to Java Card limitations and how much is just a matter of a complete redesign of JML and its tools. We propose the requirements for a new, Java Card compliant language and indicate the lines along which a compiler for this language should be built. This language, JCML, strips from JML the aspects Java Card does not support, such as concurrency and unsupported types. This would not be enough, however, without a great effort to optimize the verification code generated by its compiler, as this verification code must run on the card. The JCML compiler, although much more restricted than the one for JML, is able to generate Java Card compliant verification code for some lightweight specifications. In conclusion, we present a Java Card compliant variant of JML, JCML (Java Card Modeling Language), together with a preliminary version of its compiler.
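
A minimal sketch of the kind of guard such a compiler could weave into an on-card method, for a hypothetical debit operation; the JML-style annotation is the specification, and the if-statement is the generated, Java Card compliant verification code.

```java
import javacard.framework.ISOException;

public class Purse {
    private short balance;

    //@ requires amount > 0 && amount <= balance;
    public void debit(short amount) {
        // generated runtime check: reject the command, preserve card data
        if (!(amount > 0 && amount <= balance))
            ISOException.throwIt((short) 0x6A80);
        balance -= amount;
    }
}
```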

Relevance: 100.00%

Abstract:

Due to the great difficulty of solving Combinatorial Optimization Problems exactly, heuristic methods have been developed, yet for many years the performance of these approaches was not analyzed in a systematic way. This work proposes a statistical analysis of heuristic approaches to the Traveling Salesman Problem (TSP). The focus of the analysis is to evaluate the performance of each approach with respect to the computational time required to reach the optimal solution of a given TSP instance. Survival Analysis was used, assisted by methods for testing the hypothesis of equality between survival functions. The evaluated approaches were divided into three classes: Lin-Kernighan algorithms, Evolutionary Algorithms and Particle Swarm Optimization. Beyond those approaches, the analysis also included a memetic algorithm (for symmetric and asymmetric TSP instances) that uses the Lin-Kernighan heuristic as its local search procedure.
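
Under this view, the "event" is reaching the optimal solution and runs stopped at a time limit are right-censored. A minimal sketch of the Kaplan-Meier estimator such an analysis rests on, ignoring tied event times for brevity (the thesis's actual tooling is not specified in the abstract):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class KaplanMeier {
    record Run(double time, boolean foundOptimum) {}  // censored if false

    /** Prints S(t) = P(optimum not yet found at time t) after each event. */
    static void estimate(List<Run> runs) {
        List<Run> sorted = new ArrayList<>(runs);
        sorted.sort(Comparator.comparingDouble(Run::time));
        int atRisk = sorted.size();
        double s = 1.0;
        for (Run r : sorted) {
            if (r.foundOptimum()) {                    // an event occurred
                s *= (atRisk - 1) / (double) atRisk;
                System.out.printf("t=%.2f  S(t)=%.3f%n", r.time(), s);
            }
            atRisk--;  // events and censorings both leave the risk set
        }
    }
}
```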

Relevance: 100.00%

Abstract:

Individual techniques of supervised Machine Learning (ML), also known as classifiers or classification algorithms, supply solutions that are, most of the time, considered efficient. However, experimental results obtained with large pattern sets, and/or with sets containing an expressive amount of irrelevant or incomplete features, show a decrease in the precision of these techniques. In other words, such techniques cannot recognize patterns efficiently in complex problems. With the intention of improving the performance and efficiency of these ML techniques, the idea arose of making several ML algorithms work jointly, giving origin to the term Multi-Classifier System (MCS). An MCS has as its components different ML algorithms, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to perform better than its base classifiers, the results obtained by each base classifier must present a certain diversity, that is, a difference between the results obtained by each classifier composing the system: there is no point in an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is a continuous search to improve the results obtained by this type of system. Aiming at this improvement and at better consistency in the results, as well as at larger classifier diversity, methodologies characterized by the use of weights, or confidence values, have recently been investigated. These weights can describe the importance that a certain classifier assigned when associating each pattern with a determined class. The weights are also used, in association with the outputs of the classifiers, during the recognition (use) process of the MCS. There are different ways of calculating these weights, divided into two categories: static weights and dynamic weights. The first category is characterized by values that do not change during the classification process; in the second category, the values are modified during the classification process. In this work, an analysis is made to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, to verify whether there is some relation between the use of weights in MCSs and different levels of diversity.
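
A minimal sketch of weight-based combination as weighted voting; with static weights the array below is fixed after training, while with dynamic weights it would be recomputed for each test pattern. Names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class WeightedVoting {
    /** predictions[i]: class chosen by base classifier i; weights[i]: its weight. */
    static String combine(String[] predictions, double[] weights) {
        Map<String, Double> score = new HashMap<>();
        for (int i = 0; i < predictions.length; i++)
            score.merge(predictions[i], weights[i], Double::sum);
        return score.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .orElseThrow()
                    .getKey();
    }
}
```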

Relevance: 100.00%

Abstract:

Java Card technology allows the development and execution of small applications embedded in smart cards. A Java Card application is composed of an external card client and of an application on the card that implements the services available to the client by means of an Application Programming Interface (API). Usually, these applications manipulate and store important information, such as cash and confidential data of their owners. Thus, it is necessary to adopt rigor when developing a smart card application, to improve its quality and trustworthiness. The use of formal methods in the development of these applications is a way to reach these quality requirements. The B method is one of many formal methods for system specification. Development in B starts with the functional specification of the system, continues with the application of optional refinements to the specification and, from the last refinement level, makes it possible to generate code for some programming language. The B formalism has good tool support, and its application to Java Card is adequate, since the specification and development of APIs is one of the major applications of B. The BSmart method proposed here aims to promote the rigorous development of Java Card applications up to the generation of their code, based on the refinement of a formal specification described in the B notation. This development is supported by the BSmart tool, composed of programs that automate each stage of the method, and by a library of B modules and Java Card classes that model primitive types, essential Java Card API classes and reusable data structures.
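
For reference, the end point of such a pipeline is ordinary Java Card code. A minimal sketch of the kind of applet a generated on-card service could live in; the class name and the instruction byte are illustrative, and only the standard javacard.framework API is used.

```java
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

public class GeneratedApplet extends Applet {
    public static void install(byte[] buf, short off, byte len) {
        new GeneratedApplet().register();
    }

    public void process(APDU apdu) {
        byte[] buffer = apdu.getBuffer();
        switch (buffer[ISO7816.OFFSET_INS]) {
            case (byte) 0x10:
                // body refined from the last level of the B specification
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
```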

Relevance: 100.00%

Abstract:

In this work, we propose a multiagent system for digital image steganalysis based on the polygynous bee model. The approach aims to solve the problem of automatic steganalysis for digital media, with a case study on digital images. The system architecture was designed not only to detect whether a file is suspected of concealing a hidden message, but also to extract the hidden message, or information regarding it. Several experiments were performed, and the results confirm a substantial enhancement (from a 67% to an 82% success rate) from using the multiagent approach, a gain not observed in traditional systems. An ongoing application of the technique is the detection of anomalies in digital data produced by sensors that capture brain emissions in small animals. The detection of such anomalies can be used to test theories and evidence of imagery completion performed by the brain in visual cortex areas during sleep.
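
The abstract does not detail the detection features each agent computes, so the sketch below uses a common steganalysis baseline as a stand-in, not the thesis's bee-inspired method: the chi-square test on pairs of values, which flags the histogram equalization typical of sequential LSB embedding.

```java
public class ChiSquareAgent {
    /** Chi-square statistic over the LSB pairs of a 256-bin pixel histogram. */
    static double chiSquare(int[] histogram) {
        double chi2 = 0.0;
        for (int k = 0; k < 128; k++) {
            double expected = (histogram[2 * k] + histogram[2 * k + 1]) / 2.0;
            if (expected > 0) {
                double d = histogram[2 * k] - expected;
                chi2 += d * d / expected;
            }
        }
        return chi2;  // low values mean equalized pairs: likely embedding
    }
}
```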

Relevance: 100.00%

Abstract:

In systems that combine the outputs of classification methods (combination systems), such as ensembles and multi-agent systems, one of the main constraints is that the base components (classifiers or agents) should be diverse among themselves; there is clearly no accuracy gain in a system composed of a set of identical base components. One way of increasing diversity is through the use of feature selection or data distribution methods in combination systems. In this work, the impact of using data distribution methods among the components of combination systems is investigated. Different methods of data distribution are used, and the combination systems are analyzed under several different configurations. Through this analysis, the aim is to detect which combination systems are more suitable for distributing features among their components.
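
A minimal sketch of one such data distribution method, the random subspace method, which hands each base component a different random subset of the feature indices to increase diversity; the parameter names are illustrative, and the abstract does not list the exact methods compared.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class FeatureDistribution {
    /** For each of k components, draws m distinct feature indices. */
    static List<List<Integer>> randomSubspaces(int numFeatures, int k, int m, long seed) {
        Random rng = new Random(seed);
        List<List<Integer>> subsets = new ArrayList<>();
        for (int c = 0; c < k; c++) {
            List<Integer> all = new ArrayList<>();
            for (int f = 0; f < numFeatures; f++) all.add(f);
            Collections.shuffle(all, rng);
            subsets.add(new ArrayList<>(all.subList(0, m)));  // this component's view
        }
        return subsets;
    }
}
```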

Relevance: 100.00%

Abstract:

Most solutions proposed as candidates for implementing audio and video distribution services have been designed with specific infrastructure conditions, video stream formats, or client types in mind. Applications that use video distribution services usually need to deal with large oscillations in demand caused by users joining and leaving the service; the huge variation in television audience levels is an example. This behavior places an important requirement on this class of distributed systems: the capacity to reconfigure as a consequence of variations in demand. This dissertation presents a study on the use of mobile agents to implement the servers of a video distribution service called DynaVideo. One of the main characteristics of this service is the ability to adjust its configuration in response to variations in demand. Since DynaVideo servers can replicate themselves and are implemented as mobile code, their placement can be optimized to meet a given demand and, consequently, the service configuration can be adjusted to minimize the resources needed to distribute video to its users. The main contribution of this dissertation was to prove the viability of the concept of servers implemented as Java mobile agents based on the Aglets software development environment.
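
A rough sketch of the mobile-server idea, assuming the Aglets API (com.ibm.aglet.Aglet and the atp:// agent transfer protocol); the load threshold, target host and measurement logic are illustrative.

```java
import com.ibm.aglet.Aglet;
import java.net.URL;

public class VideoServerAgent extends Aglet {
    private static final int MAX_CLIENTS = 50;  // hypothetical capacity

    public void run() {
        try {
            if (currentDemand() > MAX_CLIENTS) {
                // relocate (or, in DynaVideo, replicate) toward the demand
                dispatch(new URL("atp://other-host:4434/"));
            }
            serveClients();
        } catch (Exception e) {
            // a real service would log the failure and keep serving locally
        }
    }

    private int currentDemand() { return 0; /* count active sessions */ }
    private void serveClients()  { /* stream video to attached clients */ }
}
```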

Relevance: 100.00%

Abstract:

Pervasive applications use context provision middleware as an infrastructure for providing context information. Typically, those applications use publish/subscribe communication to eliminate the direct coupling between components and to allow selective information dissemination based on the interests of the communicating elements. Composite event mechanisms are used together with such middleware to aggregate individual low-level events, originating from heterogeneous sources, into high-level context information relevant to the application. CES (Composite Event System) is a composite event mechanism that works in cooperation with several context provision middleware systems simultaneously. With that integration, applications use CES to subscribe to composite events; CES, in turn, subscribes to the primitive events in the appropriate underlying middleware and notifies the applications when the composite events occur. Furthermore, CES offers a language with a set of operators for the definition of composite events, which also allows context information sharing.
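
A minimal sketch of detecting one composite event on top of publish/subscribe, assuming a conjunction-within-a-time-window operator over two primitive events; the operator, event names and API are illustrative, not CES's actual language.

```java
import java.util.function.Consumer;

class AndWithinDetector {
    private final long windowMillis;
    private final Consumer<String> notifyApplication;
    private long firstSeenAt = -1;

    AndWithinDetector(long windowMillis, Consumer<String> notifyApplication) {
        this.windowMillis = windowMillis;
        this.notifyApplication = notifyApplication;
    }

    /** Fed with primitive events subscribed from the underlying middleware. */
    void onPrimitive(String type, long timestamp) {
        if (type.equals("temperatureHigh")) {
            firstSeenAt = timestamp;
        } else if (type.equals("pressureHigh") && firstSeenAt >= 0
                   && timestamp - firstSeenAt <= windowMillis) {
            notifyApplication.accept("overheatRisk");  // the composite event
        }
    }
}
```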

Relevance: 100.00%

Abstract:

The increasing capacity to integrate transistors has made it possible to develop complete systems, with several components, on a single chip, called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as happens with buses, or become an ad hoc solution, as with bus hierarchies. Thus, the ideal interconnection subsystem for SoCs is the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other projects. However, NoCs can increase design complexity, chip area and dissipated power, so it is necessary either to change the way they are used or to change the development paradigm. This work proposes a NoC-based system in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds further destinations until all instructions have been executed. The objective, therefore, is to study the viability of this system, named the IPNoSys system. A cycle-accurate simulation tool was developed in SystemC to simulate the system, and the applications were implemented in a packet description language also developed in this study. Through the simulation tool, several results were obtained to evaluate system performance. The methodology used to describe an application transforms the high-level application into a data-flow graph, which then becomes one or more packets. This methodology was applied to three applications: a counter, a 2D DCT and a floating-point addition. The counter was used to evaluate a deadlock solution and to execute a parallel application; the DCT was used for comparison with the STORM platform; and the floating-point addition evaluated the efficiency of a software routine that performs an instruction not implemented in hardware. The simulation results confirm the viability of the IPNoSys system: they show that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and that the execution time of IPNoSys is better than that of the STORM platform.
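
A rough model of the processor-less execution idea, assuming a packet carries its pending instructions and each router hop consumes one instruction, feeding its result forward; the two-operation instruction set and names are illustrative, and real IPNoSys packets are richer, with routing following the spiral complement algorithm.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PacketExecution {
    record Instr(String op, int operand) {}

    static int routeAndExecute(Deque<Instr> packet, int initialValue) {
        int acc = initialValue;
        while (!packet.isEmpty()) {        // one instruction per router hop
            Instr i = packet.poll();
            acc = switch (i.op()) {
                case "ADD" -> acc + i.operand();
                case "MUL" -> acc * i.operand();
                default -> throw new IllegalArgumentException(i.op());
            };
        }
        return acc;                        // delivered at the destination
    }
}
```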