873 results for Multi agent systems


Relevance:

80.00%

Publisher:

Abstract:

Nowadays, classifying proteins into structural classes, which concerns inferring patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason for this is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Given the great gap between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used for the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machine and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work suffers from class imbalance, artificial class-balancing techniques (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to mitigate this problem. To evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by hypothesis tests in order to assess whether there is a statistically significant difference between them.
With respect to the results obtained with the individual classifiers, Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the global classification error. Nevertheless, their use did improve the classification error for the minority class; in this context, the NCL technique proved the most appropriate.
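The voting combiner and the random-undersampling balancer named above can be sketched minimally as follows (an illustration only, not the implementation used in the work; all function names are hypothetical):

```python
import random
from collections import Counter

def random_undersample(samples, labels, seed=0):
    """Randomly drop majority-class examples until every class has as
    many examples as the smallest class (random undersampling)."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    n_min = min(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        for s in rng.sample(group, n_min):
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels

def majority_vote(predictions):
    """Combine base-classifier outputs by plurality voting."""
    return Counter(predictions).most_common(1)[0][0]
```

The same vote combiner works regardless of which base classifiers produced the predictions, which is what makes Voting attractive for heterogeneous ensembles.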

Relevance:

80.00%

Publisher:

Abstract:

In multi-robot systems, both the control architecture and the work strategy represent a challenge for researchers. It is important to have a robust architecture that can be easily adapted to requirement changes. It is also important that the work strategy allows robots to complete tasks efficiently, considering that robots interact directly in environments with humans. In this context, this work explores two approaches to robot soccer team coordination for the development of cooperative tasks. Both approaches are based on a combination of imitation learning and reinforcement learning. In the first approach, a control architecture was developed, together with a fuzzy inference engine for recognizing situations in robot soccer games, software for narrating robot soccer games based on that inference engine, and an implementation of learning by imitation from the observation and analysis of other robotic teams. Moreover, state abstraction was efficiently implemented in reinforcement learning applied to the standard robot soccer problem. Finally, reinforcement learning was implemented in a form where actions are explored only in some states (for example, states where a specialist robot system used them), unlike the traditional form, where actions have to be tested in all states. In the second approach, reinforcement learning was implemented with function approximation, for which an algorithm called RBF-Sarsa(λ) was created. In both approaches, batch reinforcement learning algorithms were implemented and imitation learning was used as a seed for reinforcement learning. Moreover, learning from robotic teams controlled by humans was explored. The proposals in this work proved efficient in the standard robot soccer problem and, when implemented in other robotic systems, will allow those systems to develop their assigned tasks efficiently and effectively, giving them a high capability to adapt to requirement and environment changes.
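The Sarsa(λ) family used above can be illustrated with a minimal tabular update with eligibility traces (the work's RBF-Sarsa(λ) uses radial-basis-function approximation instead; this simplified sketch and its names are assumptions, not the thesis's code):

```python
def sarsa_lambda_update(Q, E, s, a, r, s2, a2,
                        alpha=0.1, gamma=0.9, lam=0.8):
    """One tabular Sarsa(lambda) step with replacing eligibility traces.
    Q maps (state, action) pairs to value estimates; E holds the traces.
    Imitation learning could seed Q for demonstrated (state, action)
    pairs before reinforcement learning starts."""
    delta = r + gamma * Q.get((s2, a2), 0.0) - Q.get((s, a), 0.0)
    E[(s, a)] = 1.0  # replacing trace for the visited pair
    for key in list(E):
        Q[key] = Q.get(key, 0.0) + alpha * delta * E[key]
        E[key] *= gamma * lam  # decay all traces
    return Q, E
```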

Relevance:

80.00%

Publisher:

Abstract:

Multi-classifier systems, also known as ensembles, have been widely used to solve several problems because they often present better performance than the individual classifiers that compose them. For this to happen, however, the base classifiers need to be as accurate as they are diverse among themselves; this is known as the diversity/accuracy dilemma. Given its importance, some works have investigated ensemble behavior in the context of this dilemma. However, the majority of them address homogeneous ensembles, i.e., ensembles composed of only one type of classifier. Motivated by this limitation, this thesis performs, using genetic algorithms, a detailed study of the diversity/accuracy dilemma for heterogeneous ensembles.
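A fitness function trading member accuracy against pairwise diversity, of the kind a genetic algorithm could optimize when selecting ensemble members, might look like this (a hedged sketch under assumed names, not the thesis's actual fitness):

```python
def disagreement(preds_a, preds_b):
    """Fraction of instances on which two classifiers disagree."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def ensemble_fitness(member_preds, truth, w=0.5):
    """Balance mean member accuracy against mean pairwise diversity,
    the two axes of the diversity/accuracy dilemma. w weights accuracy."""
    accs = [sum(p == t for p, t in zip(preds, truth)) / len(truth)
            for preds in member_preds]
    pairs = [(i, j) for i in range(len(member_preds))
             for j in range(i + 1, len(member_preds))]
    div = (sum(disagreement(member_preds[i], member_preds[j])
               for i, j in pairs) / len(pairs)) if pairs else 0.0
    return w * (sum(accs) / len(accs)) + (1 - w) * div
```

A genetic algorithm would evolve bit strings selecting which heterogeneous members enter the ensemble, using a function of this shape as the fitness to maximize.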

Relevance:

80.00%

Publisher:

Abstract:

In this paper we describe RTsim, a scheduler simulator for real-time tasks that can be used as a tool for teaching real-time scheduling algorithms. It simulates a variety of pre-programmed scheduling policies for single- and multi-processor systems, as well as simple algorithm variants introduced by its user. Using RTsim, students can conduct experiments that allow them to understand the effects of each policy under different load conditions and learn which policy is better for different workloads. We show how to use RTsim as a learning tool and report the results achieved with its application in the Real-Time Systems course taught in the B.Sc. in Computer Science at São Paulo State University (Unesp), Rio Preto.
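As an illustration of the kind of experiment such a simulator supports, a minimal earliest-deadline-first (EDF) simulation on one processor can be sketched as follows (EDF is a standard policy; this sketch is an assumption for illustration, not RTsim's code):

```python
def edf_schedule(tasks, horizon):
    """Simulate earliest-deadline-first on one processor over integer
    time slots. Each task is a (period, wcet) pair with implicit
    deadline equal to its period. Returns, per slot, the absolute
    deadline of the job that ran, or None for an idle slot."""
    jobs = []      # each job: [absolute_deadline, remaining_time]
    timeline = []
    for t in range(horizon):
        for period, wcet in tasks:
            if t % period == 0:          # periodic job release
                jobs.append([t + period, wcet])
        ready = [j for j in jobs if j[1] > 0]
        if ready:
            job = min(ready, key=lambda j: j[0])  # earliest deadline
            job[1] -= 1
            timeline.append(job[0])
        else:
            timeline.append(None)
    return timeline
```

Running the same task set under a different policy (e.g., rate-monotonic) and comparing timelines is precisely the kind of exercise the paper proposes for students.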

Relevance:

80.00%

Publisher:

Abstract:

When dealing with spatio-temporal simulations of load growth inside a service zone, one of the most important problems faced by a distribution utility is how to represent the relationships among different areas. A new load in a certain part of the city can modify the load growth in other parts of the city, even outside its radius of influence. These interactions are called Urban Dynamics. This work discusses how to incorporate Urban Dynamics considerations into spatial electric load forecasting simulations using multi-agent simulations. To explain the approach, three examples are introduced: the effect of an attractive load, the effect of a repulsive load, and the effect of several attractive/repulsive loads acting at the same time as the natural load growth is considered. © 2012 IEEE.
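The attraction/repulsion effects described above can be illustrated with a toy influence function for a grid cell (a hypothetical sketch with assumed names and decay model, not the paper's simulation):

```python
import math

def influence(cell, loads, decay=1.0):
    """Net effect of point loads on a grid cell: positive strength
    attracts new load growth, negative strength repels it, and the
    effect fades with distance (toy distance-decay model)."""
    x, y = cell
    total = 0.0
    for lx, ly, strength in loads:
        d = math.hypot(x - lx, y - ly)
        total += strength / (1.0 + decay * d)
    return total
```

In a multi-agent simulation, each cell agent could add such an influence term to its natural growth rate at every time step.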

Relevance:

80.00%

Publisher:

Abstract:

Software transactional memory (STM) systems have been used as an approach to improve performance by allowing the concurrent execution of atomic blocks. However, under high-contention workloads, STM-based systems can considerably degrade performance as the transaction conflict rate increases. Contention management policies have been used as a way to select which transaction to abort when a conflict occurs. In general, contention managers are not capable of avoiding conflicts, as they can only select which transaction to abort and the moment it should restart. Since contention managers act only after a conflict is detected, it becomes harder to effectively increase transaction throughput. More proactive approaches have emerged, aiming at predicting when a transaction is likely to abort and postponing its execution. Nevertheless, most of the proposed proactive techniques are limited, as they do not replace the doomed transaction with another or, when they do, they rely on the operating system for that, having little or no control over which transaction to run. This article proposes LUTS, a lightweight user-level transaction scheduler. Unlike other techniques, LUTS provides the means for selecting another transaction to run in parallel, thus improving system throughput. We discuss the design of LUTS and propose a dynamic conflict-avoidance heuristic built around its scheduling capabilities. Experimental results, conducted with the STAMP and STMBench7 benchmark suites running on TinySTM and SwissTM, show how our conflict-avoidance heuristic can effectively improve STM performance on high-contention applications. © 2012 Springer Science+Business Media, LLC.
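A proactive, history-based conflict-avoidance decision of the general kind described above can be sketched as follows (a toy heuristic with assumed names for illustration; it does not reproduce LUTS's actual heuristic):

```python
def record_run(history, tx_a, tx_b, conflicted):
    """Record one concurrent run of two transactions and whether it
    ended in a conflict. history maps an unordered pair of transaction
    ids to a (conflicts, runs) tuple."""
    key = frozenset((tx_a, tx_b))
    conflicts, runs = history.get(key, (0, 0))
    history[key] = (conflicts + (1 if conflicted else 0), runs + 1)

def should_serialize(history, tx_a, tx_b, threshold=0.5):
    """Suggest not running two transactions concurrently once their
    observed conflict rate reaches the threshold; a scheduler could
    then pick a different transaction to run in parallel instead."""
    conflicts, runs = history.get(frozenset((tx_a, tx_b)), (0, 0))
    return runs > 0 and conflicts / runs >= threshold
```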

Relevance:

80.00%

Publisher:

Abstract:

Objective: The purpose of this study was to identify the clinical factors associated with time to hCG remission among women with low-risk postmolar GTN. Methods: This study included a non-concurrent cohort of 328 patients diagnosed with low-risk postmolar GTN according to FIGO 2002 criteria. Associations of time to hCG remission with history of prior mole, molar histology, time to persistence, use of D&C at persistence, presence of metastatic disease, FIGO score, hCG values at persistence, type of first-line therapy and use of multi-agent chemotherapy were investigated with both univariate and multivariate analyses. Results: Overall median time to remission was 46 days. Ten percent of the patients required multi-agent chemotherapy to achieve hCG remission. Multivariate analysis incorporating the variables significant on univariate analysis confirmed that complete molar histology (HR 1.45), metastatic disease (HR 1.66), use of multi-agent therapy (HR 2.00) and FIGO score (HR 1.82) were associated with longer time to remission. There was a linear relationship between FIGO score and time to hCG remission: each 1-point increment in FIGO score was associated with an average 17-day increase in hCG remission time (95% CI: 12.5-21.6). Conclusions: Complete mole histology prior to GTN, presence of metastatic disease, use of multi-agent therapy and higher FIGO score were independent factors associated with longer time to hCG remission in low-risk GTN. Identifying the prognostic factors associated with time to remission and effective counseling may help improve treatment planning and reduce anxiety in patients and their families. © 2013 Elsevier Inc. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering (FEIS)

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Information Science (FFC)

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

Current trends in software development are pushing the need to face a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, in such a way that the resulting dynamics exhibit some degree of order, i.e., in terms of system evolution and desired equilibrium. Autonomous agents and Multiagent Systems are argued in the literature to be one of the most immediate approaches for addressing such challenges. Indeed, agent research seems to converge towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, which are assumed to be autonomous entities pursuing a series of design objectives, Multiagent Systems introduce new notions as first-class entities, aimed above all at modeling institutional/organizational entities, in charge of normative regulation, interaction and teamwork management, as well as environmental entities, placed as resources to further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems offers a set of diverse approaches, each specifically addressing at least one of the aspects mentioned above, this work proposes a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, the work pursues the reconciliation of environments with sociality, of social interaction with environment-based interaction, and of environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex and situated organizations in a coherent programming approach.
Rooted in the Agents and Artifacts (A&A) meta-model, which has recently been introduced in the context of both agent-oriented software engineering and programming, the thesis promotes the notion of Embodied Organizations, characterized by computational infrastructures attaining a seamless integration between agents, organizations and environmental entities.

Relevance:

80.00%

Publisher:

Abstract:

Nano(bio)science and nano(bio)technology attract tremendous and growing interest in both academia and industry. They are undergoing rapid developments on many fronts such as genomics, proteomics, systems biology, and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor to the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high-molar-mass proteins and protein complexes, sub-cellular units, viruses, and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separating particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (rh). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation proceeds without surface interaction of the analyte with packing or gel media, and there is no stationary phase that could induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes becomes possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, coupling FlFFF with multi-angle light scattering (MALS) detection allows absolute molecular weight and size determination, and mass spectrometry has brought FlFFF into the field of proteomics. The potential of FlFFF couplings with multi-detection systems is discussed in the first section of this dissertation.
The second and third sections are dedicated to new methods developed for the analysis and characterization of different samples of interest in the fields of diagnostics, pharmaceutics, and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates. In particular, it focuses on FlFFF methods developed to give new insights into: a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The discussed results indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.

Relevance:

80.00%

Publisher:

Abstract:

This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we address the allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict-detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
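As a point of contrast with the exact methods proposed, the underlying problem shape, precedence-constrained tasks placed over time on finite resources, can be illustrated with a greedy list-scheduling baseline (an illustrative sketch with assumed names, not one of the thesis's algorithms):

```python
def list_schedule(durations, preds, n_resources):
    """Greedy list scheduling: repeatedly pick a precedence-feasible
    task and place it on the resource where it can start earliest.
    durations maps task -> processing time; preds maps task -> list of
    predecessor tasks. Returns {task: (start_time, resource_index)}."""
    free_at = [0] * n_resources      # when each resource becomes free
    finish = {}                      # task -> finish time
    schedule = {}
    remaining = set(durations)
    while remaining:
        ready = [u for u in remaining
                 if all(p in finish for p in preds.get(u, []))]
        # earliest-start time of a task = latest predecessor finish
        def est(u):
            return max((finish[p] for p in preds.get(u, [])), default=0)
        task = min(ready, key=est)
        r = min(range(n_resources),
                key=lambda i: max(free_at[i], est(task)))
        start = max(free_at[r], est(task))
        free_at[r] = start + durations[task]
        finish[task] = free_at[r]
        schedule[task] = (start, r)
        remaining.remove(task)
    return schedule
```

Unlike such a heuristic, the exact CP/OR hybrids in the thesis can prove optimality of the makespan they return, which matters when the schedule is computed once, off-line, for the platform's whole lifetime.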

Relevance:

80.00%

Publisher:

Abstract:

In this work, fluorocarbon-based and organosilicon plasma polymer films were prepared and investigated with respect to their structural and functional properties. Both material systems are of great scientific and application interest in coating technology. The films were deposited by plasma-enhanced chemical vapor deposition (PECVD) in parallel-plate reactors. The investigations of fluorocarbon plasma polymerization focused on the preparation of ultra-thin films, i.e., films less than 5 nm thick. This was realized by pulsed plasma excitation and the use of a gas mixture of trifluoromethane (CHF3) and argon. The bonding structure of the films was analyzed as a function of the input power, which determines the degree of fragmentation of the monomers in the plasma. For this purpose, X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM), time-of-flight secondary ion mass spectrometry (ToF-SIMS) and X-ray reflectometry (XRR) were employed. The deposited films showed homogeneous growth behavior and no pronounced interface regions towards the substrate or the surface. The XPS analyses indicate that chain-forming reactions of CF2 radicals in the plasma play an important role in the film formation process. Furthermore, it was shown that the chosen deposition process enables a controlled reduction of the wettability of various substrates. Film thicknesses of less than 3 nm suffice to achieve a Teflon-like surface character with surface energies around 20 mN/m. This opens up new application possibilities for ultra-thin fluorocarbon films, which is demonstrated with an example from the field of nano-optics.
For the organosilicon films, deposited using the monomer hexamethyldisiloxane (HMDSO), the first task was to identify the process parameters that determine their organic or glass-like character. To this end, the influence of the power input and of the addition of oxygen as a reactive gas on the elemental composition of the films was investigated. At low plasma powers and oxygen flows, predominantly carbon-rich films are deposited, which was attributed to a lower fragmentation of the hydrocarbon groups. It was found that varying the oxygen fraction in the process gas allows very precise control of the film properties. Using secondary neutral mass spectrometry (SNMS), the process feasibility and analytical quantifiability of alternating layer systems composed of polymer-like and glass-like layers were demonstrated. The hydrogen content could be determined from the intensity ratio of Si:H molecules to Si atoms in the SNMS spectrum. Furthermore, it was shown that the deposition of HMDSO-based gradient films can achieve a marked reduction of friction and wear in elastomer components.