787 results for Gradient-based approaches
Abstract:
Graduate program in Physics - IFT
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pattern recognition techniques have long faced the problem of a high computational burden for dataset learning. Among the most widely used techniques, we may highlight Support Vector Machines (SVM), which have obtained very promising results for data classification. However, this classifier requires an expensive training phase, which is dominated by a parameter optimization that aims to make SVM less prone to errors over the training set. In this paper, we model the problem of finding such parameters as a metaheuristic-based optimization task, which is performed through Harmony Search (HS) and some of its variants. The experimental results have shown the robustness of HS-based approaches for such a task in comparison with an exhaustive (grid) search, and also with a Particle Swarm Optimization-based implementation.
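The parameter search described above can be sketched with a basic Harmony Search loop. This is an illustrative, minimal sketch: the objective below is a toy quadratic standing in for the SVM cross-validation error over (C, gamma), and the parameter names (hms, hmcr, par, bw) follow standard Harmony Search terminology rather than the paper's exact settings.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=500, seed=0):
    """Minimize `objective` over the box `bounds` with basic Harmony Search."""
    rng = random.Random(seed)
    # Initialize the harmony memory with random solutions.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:           # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:        # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                             # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))   # clamp to bounds
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                 # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Stand-in objective: in the paper's setting this would be the SVM
# cross-validation error as a function of (C, gamma).
obj = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
params, err = harmony_search(obj, [(-5, 5), (-5, 5)])
```

Unlike a grid search, which evaluates every point of a fixed lattice, the loop above spends its evaluation budget adaptively around the best harmonies found so far.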
Abstract:
The study proposes a constrained least square (CLS) pre-distortion scheme for multiple-input single-output (MISO) multiple access ultra-wideband (UWB) systems. In such a scheme, a simple objective function is defined, which can be efficiently solved by a gradient-based algorithm. For the performance evaluation, scenarios CM1 and CM3 of the IEEE 802.15.3a channel model are considered. Results show that the CLS algorithm has a fast convergence and a good trade-off between intersymbol interference (ISI) and multiple access interference (MAI) reduction and signal-to-noise ratio (SNR) preservation, performing better than time-reversal (TR) pre-distortion.
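A minimal sketch of the gradient-based solution of a least-squares objective, of the kind used for the CLS pre-distortion design. The matrix A and vector b below are random stand-ins (in the paper they would be built from the UWB channel responses and the desired ISI/MAI-free response); the step size is derived from the spectral norm of A to guarantee convergence.

```python
import numpy as np

# Toy least-squares problem standing in for the pre-distortion design.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def gradient_ls(A, b, step=None, iters=2000):
    """Minimize ||Ax - b||^2 by steepest descent."""
    if step is None:
        # The gradient is 2 A^T (Ax - b), Lipschitz with constant
        # L = 2 * sigma_max(A)^2, so step = 1/L guarantees convergence.
        step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * 2 * A.T @ (A @ x - b)   # follow the negative gradient
    return x

x_gd = gradient_ls(A, b)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]   # closed-form reference
```

The iterative solver converges to the same solution as the closed-form least-squares routine; its appeal in the paper's setting is that each iteration is cheap and the constraint structure is easy to incorporate.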
Abstract:
Visual analysis of social networks is usually based on graph drawing algorithms and tools. However, social networks are a special kind of graph in the sense that the interpretation of displayed relationships is heavily dependent on context. Context, in its turn, is given by attributes associated with graph elements, such as individual nodes, edges, and groups of edges, as well as by the nature of the connections between individuals. In most systems, attributes of individuals and communities are not taken into consideration during graph layout, except to derive weights for force-based placement strategies. This paper proposes a set of novel tools for displaying and exploring social networks based on attribute and connectivity mappings. These properties are employed to lay out nodes on the plane via multidimensional projection techniques. For the attribute mapping, we show that node proximity in the layout corresponds to similarity in attributes, making it easy to locate groups of similar nodes. The projection based on connectivity yields an initial placement that forgoes force-based or graph analysis algorithms, reaching a meaningful layout in one pass. When a force algorithm is then applied to this initial mapping, the final layout presents better properties than conventional force-based approaches. Numerical evaluations show a number of advantages of pre-mapping points via projections. User evaluation demonstrates that these tools promote ease of manipulation as well as fast identification of concepts and associations which cannot be easily expressed by conventional graph visualization alone. In order to allow better space usage for complex networks, a graph mapping on the surface of a sphere is also implemented.
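As an example of a multidimensional projection of the kind used to lay out nodes, the sketch below uses classical multidimensional scaling (one standard projection technique, not necessarily the one used in the paper): nodes with similar attribute vectors land close together in the 2D layout.

```python
import numpy as np

def classical_mds(D, k=2):
    """Project points to k dimensions from a pairwise distance matrix D
    (classical multidimensional scaling via double centering)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]              # keep the top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Nodes described by (invented) attribute vectors; nodes 0-1 and 2-3 are
# similar pairs and should land close together in the layout.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
layout = classical_mds(D, k=2)
```

The resulting coordinates can serve as the one-pass initial placement the abstract describes, before any force-based refinement is applied.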
Abstract:
In computational structural analysis, the finite element method is probably one of the most effective and widely used numerical methods. The simplicity of the method's basic idea, and the relative ease with which it can be implemented in computer codes, have made it possible to apply this computational technique in many fields, not only in structural engineering but in applied mathematics in general. However, although finite element technology has already reached a fairly high level, for some typical structural engineering applications (two-dimensional problems, analysis of plates in bending) the performance of the commonly used elements, i.e. compatible elements, is in fact rather unsatisfactory. Finite elements based on mixed formulations come to the rescue: on the one hand they have a more complex formulation, but on the other they prevent some recurring problems, such as the shear locking phenomenon. Regardless of the type of finite element used, the quantities of interest in engineering are not the displacements but the stresses, or more generally the quantities derived from the displacements. While the former are very accurate, the latter turn out to be discontinuous and of poor quality. In recent years, post-processing procedures have become established that, starting from the finite element solution, reconstruct the stress within patches of elements, making it more accurate. These are known as recovery procedures (recovery-based approaches). The recovery procedures used here are REP (Recovery by Equilibrium in Patches) and RCP (Recovery by Compatibility in Patches).
The aim of this work is to apply these recovery procedures to a plate example, discretized with various types of finite elements, highlighting their advantages in terms of improved accuracy and faster convergence.
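The basic idea behind patch-based recovery can be sketched in a few lines: fit a smooth (here linear) stress field over a patch in the least-squares sense, then evaluate it at the nodes. This is only the common core; REP and RCP differ in imposing equilibrium or compatibility conditions on the recovered field, which the sketch omits, and the 1D data below is invented.

```python
import numpy as np

def recover_nodal_stress(x_samples, sigma_samples, x_nodes):
    """Least-squares fit of sigma(x) ~ a + b*x over a 1D patch,
    evaluated at the patch nodes."""
    A = np.vstack([np.ones_like(x_samples), x_samples]).T
    coeff, *_ = np.linalg.lstsq(A, sigma_samples, rcond=None)
    return coeff[0] + coeff[1] * x_nodes

# Stress samples at element sampling points of a two-element patch; for an
# exactly linear stress field the recovery reproduces it at the nodes,
# where raw element-wise stresses would otherwise be discontinuous.
x_s = np.array([0.25, 0.75, 1.25, 1.75])
sigma_s = 3.0 + 2.0 * x_s
nodes = np.array([0.0, 1.0, 2.0])
sigma_nodes = recover_nodal_stress(x_s, sigma_s, nodes)
```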
Abstract:
Facial expression recognition is one of the most challenging research areas in the image recognition field and has been actively studied since the 1970s. For instance, smile recognition has been studied because the smile is considered an important facial expression in human communication, and it is therefore likely useful for human–machine interaction. Moreover, if a smile can be detected and its intensity estimated, it will raise the possibility of new applications in the future.
Abstract:
In the last decade, the reverse vaccinology approach shifted the paradigm of vaccine discovery from conventional culture-based methods to high-throughput genome-based approaches for the development of recombinant protein-based vaccines against pathogenic bacteria. Besides reaching its main goal of identifying new vaccine candidates, this new procedure also produced a huge amount of molecular knowledge related to them. In the present work, we explored this knowledge in a species-independent way and performed a systematic in silico molecular analysis of more than 100 protective antigens, looking at their sequence similarity, domain composition and protein architecture in order to identify possible common molecular features. This meta-analysis revealed that, despite a low sequence similarity, most of the known bacterial protective antigens share structural/functional Pfam domains as well as specific protein architectures. Based on this, we formulated the hypothesis that the occurrence of these molecular signatures can be predictive of possible protective properties of other proteins in different bacterial species. We tested this hypothesis in Streptococcus agalactiae and identified four new protective antigens. Moreover, in order to provide a second proof of concept for our approach, we used Staphylococcus aureus as a second pathogen and identified five new protective antigens. This new knowledge-driven selection process, named MetaVaccinology, represents the first in silico vaccine discovery tool based on conserved and predictive molecular and structural features of bacterial protective antigens, and it does not depend upon the prediction of their sub-cellular localization.
Abstract:
This thesis deals with distributed control strategies for cooperative control of multi-robot systems. Specifically, distributed coordination strategies are presented for groups of mobile robots. The formation control problem is initially solved exploiting artificial potential fields. The purpose of the presented formation control algorithm is to drive a group of mobile robots into an arbitrarily shaped formation. Robots are initially controlled to create a regular polygon formation. A bijective coordinate transformation is then exploited to extend the scope of this strategy and obtain arbitrarily shaped formations. For this purpose, artificial potential fields are specifically designed, and robots are driven to follow their negative gradient. Artificial potential fields are subsequently exploited to solve the coordinated path tracking problem, making the robots autonomously spread along predefined paths and move along them in a coordinated way. The formation control problem is then solved exploiting a consensus-based approach. Specifically, weighted graphs are used both to define the desired formation and to implement collision avoidance. As expected for consensus-based algorithms, this control strategy is experimentally shown to be robust to communication delays. The global connectivity maintenance issue is then considered. Specifically, an estimation procedure is introduced to allow each agent to compute, in a distributed manner, its own estimate of the algebraic connectivity of the communication graph. This estimate is then exploited to develop a gradient-based control strategy that ensures that the communication graph remains connected as the system evolves. The proposed control strategy is developed initially for single-integrator kinematic agents and is then extended to Lagrangian dynamical systems.
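The gradient-following idea at the heart of the potential-field strategy can be sketched with a purely attractive quadratic potential; a real formation controller adds inter-robot and repulsive terms, which are omitted here, and the start and goal positions are invented.

```python
import numpy as np

def follow_potential(start, goal, step=0.1, iters=200):
    """Drive a point robot along the negative gradient of the attractive
    potential U(p) = 0.5 * ||p - goal||^2."""
    p = np.asarray(start, dtype=float)
    for _ in range(iters):
        grad = p - goal          # gradient of U at the current position
        p = p - step * grad      # descend the potential
    return p

goal = np.array([2.0, -1.0])
final = follow_potential([10.0, 10.0], goal)
```

Each iteration shrinks the distance to the goal by a constant factor (1 - step), so the robot converges geometrically to the potential's minimum.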
Abstract:
In this dissertation, the phylogenetic positions of the Xenoturbellida (Deuterostomia) and the Syndermata (Protostomia) were investigated with phylogenomic techniques. On the methodological level, it could be shown that ribosomal proteins are useful tools for phylogenetic questions because of their moderate to high degree of conservation, their abundance in smaller EST projects and, consequently, in databases, and their phylogenetic information content. Phylogenetic reconstructions and hypothesis tests based on a dataset of 11,912 amino acids showed that, within the Deuterostomia, the Xenoturbellida form a sister-group relationship with the Ambulacraria. Compared with all previous studies, this work shows the best statistical support for this topology. Furthermore, it could be corroborated that the Urochordata, rather than the Cephalochordata, are probably the sister group of the Vertebrata. Comparison of the published Xenoturbella EST datasets with our own dataset led to the conclusion that ESTs are apparently clearly less susceptible to contamination with genetic material (DNA+RNA) of other species than PCR amplificates of genomic or mitochondrial genes. However, the physiological state of the animals apparently determines the representation of transcript classes such as stress proteins and mitochondrial transcripts. The bacterial transcripts in one of the EST datasets presumably derive from chlamydiae, which possibly live symbiotically in Xenoturbella bocki. Within the Protostomia, three EST projects were carried out for representatives of the Syndermata. Based on three different protein alignment datasets of about 11,000 amino acids in length, it could be shown that the Syndermata belong within the Spiralia and that, together with the Gnathostomulida, they form the monophyletic supertaxon Gnathifera.
The exact phylogenetic position of the Syndermata within the Spiralia, however, could not yet be resolved unambiguously, and likewise no congruent evidence for the existence of the supertaxon Platyzoa could be found. In the investigation of the internal phylogeny of the Syndermata, three of the five competing hypotheses could be excluded on the grounds of the paraphyly of the Eurotatoria. Since no data from the Seisonidea were included in the analyses, the question of the internal phylogeny of the Syndermata ultimately remains open. It is clear, however, that the Eurotatoria are not monophyletic as previously assumed: the wheel-organ-bearing Bdelloidea are by no means close to the morphologically similar Monogononta, but rather to the wheel-organ-less Acanthocephala. Mapping the molecular phylogeny onto the morphological characters shows that the wheel organ was apparently reduced (partially or completely) in the Acanthocephala + Bdelloidea lineage shortly after the split of the Syndermata into Monogononta and Acanthocephala + Bdelloidea. The evolution of the retractable posterior body part (rostrum in Bdelloidea, proboscis in Acanthocephala) in the Acanthocephala + Bdelloidea lineage may have been the key event for the emergence of the endoparasitism of the Acanthocephala.
Abstract:
Data Distribution Management (DDM) is a core part of the High Level Architecture (HLA) standard: its goal is to optimize the resources used by simulation environments to exchange data. It has to filter and match the set of information generated during a simulation, so that each federate (that is, a simulation entity) only receives the information it needs. It is important that this is done quickly and as well as possible, in order to achieve better performance and avoid transmitting irrelevant data; otherwise, network resources may saturate quickly. The main topic of this thesis is the implementation of a super partes DDM testbed. It evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it may accommodate other, still unknown, methods as well. It ranks them by three factors: execution time, memory, and distance from the optimal solution. A prearranged set of instances is already available, but we also allow the creation of instances with user-provided parameters. The thesis is structured as follows. We start by introducing what DDM and HLA are and what they do in detail. The second chapter describes the state of the art, providing an overview of the best-known resolution approaches and the pseudocode of the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained from the execution of the four approaches we have implemented. The result of the work described in this thesis can be downloaded from SourceForge using the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
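The matching problem the testbed evaluates can be illustrated with the brute-force region-based baseline: a subscription extent matches an update extent when they overlap in every dimension. The regions below are invented; grid-based and sort-based DDM algorithms aim to beat this O(n·m) scan.

```python
def overlaps(r1, r2):
    """Two axis-aligned regions (lists of per-dimension (lo, hi) intervals)
    overlap iff their intervals overlap in every dimension."""
    return all(lo1 < hi2 and lo2 < hi1
               for (lo1, hi1), (lo2, hi2) in zip(r1, r2))

def brute_force_matching(subscriptions, updates):
    """Baseline region-based matching: test every pair of regions."""
    return [(i, j)
            for i, s in enumerate(subscriptions)
            for j, u in enumerate(updates)
            if overlaps(s, u)]

# Two 2D subscription regions and two 2D update regions.
subs = [[(0, 5), (0, 5)], [(6, 9), (6, 9)]]
upds = [[(4, 7), (4, 7)], [(10, 12), (0, 1)]]
matches = brute_force_matching(subs, upds)
```

Since the pairwise scan enumerates every match exactly once, its output can serve as the optimal-solution reference against which faster, possibly approximate, approaches are ranked.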
Abstract:
We obtain the exact time-dependent Kohn-Sham potentials V_ks for 1D Hubbard chains, driven by a d.c. external field, using the time-dependent electron density and current density obtained from exact many-body time evolution. The exact V_xc is compared to the adiabatically exact V_ad-xc and the "instantaneous ground state" V_igs-xc. The effectiveness of these two approximations is analyzed. Approximations for the exchange-correlation potential V_xc and its gradient, based on the local density and on the local current density, are also considered, and both physical quantities are observed to be far outside the reach of any possible local approximation. Insight into the respective roles of ground-state and excited-state correlation in the time-dependent system, as reflected in the potentials, is provided by the pair correlation function.
Abstract:
This thesis addresses the issue of generating texts in the style of an existing author that also satisfy structural constraints imposed by the genre of the text. Although Markov processes are known to be suitable for representing style, they are difficult to control so as to satisfy non-local properties, such as structural constraints, that require long-distance modeling. The framework of Constrained Markov Processes makes it possible to generate texts that are consistent with a corpus while being controllable in terms of rhyme and meter. Constrained Markov processes consist in reformulating Markov processes in the context of constraint satisfaction. The thesis describes how to represent stylistic and structural properties as constraints in this framework and how this approach can be used to generate lyrics in the style of 60 different authors. An evaluation of the described method is provided by comparing it to both pure Markov and pure constraint-based approaches. Finally, the thesis describes the implementation of an augmented text editor, called Perec. Perec is intended to enhance creativity by helping the user to write lyrics and poetry, exploiting the techniques presented so far.
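The core idea of constraining a Markov model can be sketched by pruning the transition structure with backward reachability, so that every generated sequence is guaranteed to satisfy an end-of-line constraint. The tiny corpus and the constraint below are invented, and the full framework also renormalizes the transition probabilities, which this sketch omits.

```python
import random

# A toy word-level Markov model (invented corpus).
transitions = {
    'the':  ['cat', 'dog', 'moon'],
    'cat':  ['sat', 'saw'],
    'dog':  ['sat', 'saw'],
    'moon': ['shines'],
    'sat':  ['the'], 'saw': ['the'], 'shines': ['the'],
}

def feasible_sets(n, end_word):
    """allowed[i] = words usable at position i such that end_word is
    still reachable at position n-1 (backward arc-consistency)."""
    allowed = [set() for _ in range(n)]
    allowed[n - 1] = {end_word}
    for i in range(n - 2, -1, -1):
        allowed[i] = {w for w, nxt in transitions.items()
                      if set(nxt) & allowed[i + 1]}
    return allowed

def generate(n, end_word, seed=0):
    """Sample left to right, restricted to the pruned transition structure,
    so the constraint is satisfied by construction (no rejection)."""
    rng = random.Random(seed)
    allowed = feasible_sets(n, end_word)
    word = rng.choice(sorted(allowed[0]))
    seq = [word]
    for i in range(1, n):
        word = rng.choice(sorted(set(transitions[word]) & allowed[i]))
        seq.append(word)
    return seq

line = generate(4, 'sat')
```

This is what distinguishes the approach from generate-and-test: infeasible prefixes are never sampled in the first place.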
Abstract:
In this work, various water models are investigated in so-called multiscale computer simulations with two resolutions: an atomistic resolution and a coarser resolution referred to as "coarse-grained". In the atomistic resolution, a water molecule is described by three atoms, following its chemical structure; in the coarse-grained resolution, by contrast, a molecule is represented by a single bead.
The coarse-grained models presented in this work are developed with various coarse-graining methods, chiefly "iterative Boltzmann inversion" and "iterative Monte Carlo inversion". Both are structure-based approaches that aim to reproduce certain structural properties of the underlying atomistic system, such as the pair distribution functions. To automate the application of these methods, the software package "Versatile Object-oriented Toolkit for Coarse-Graining Applications" (VOTCA) was developed.
It is investigated to what extent coarse-grained models can simultaneously reproduce several properties of the underlying atomistic model, e.g. thermodynamic properties such as pressure and compressibility, or structural properties that were not used in building the model, e.g. the tetrahedral packing behavior that is responsible for many of water's special properties.
Using the "Adaptive Resolution Scheme", both resolutions are combined in a single simulation. This exploits the advantages of both models: the detailed representation of a spatially small region at atomistic resolution, and the computational efficiency of the coarse-grained model, which enlarges the range of accessible time and length scales.
In these simulations, the influence of the hydrogen-bond network on the hydration of fullerenes can be investigated. It turns out that the structure of the water molecules at the surface is dominated mainly by the type of interaction between the fullerene and water, and less by the hydrogen-bond network.
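The iterative Boltzmann inversion mentioned above has a compact update rule, V_{i+1}(r) = V_i(r) + kT · ln(g_i(r) / g_target(r)). The sketch below applies one such update to invented radial-distribution data to show its direction: the pair potential is raised (made more repulsive) where the current model over-structures relative to the target, and lowered where it under-structures.

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0):
    """One iterative Boltzmann inversion step on tabulated data:
    V_{i+1}(r) = V_i(r) + kT * ln(g_i(r) / g_target(r))."""
    return V + kT * np.log(g_current / g_target)

# Invented tabulated data on a short radial grid.
r = np.linspace(0.3, 1.5, 5)
V0 = np.zeros_like(r)                          # initial potential guess
g_tgt = np.array([0.2, 1.4, 1.1, 1.0, 1.0])    # target (atomistic) RDF
g_cur = np.array([0.4, 1.0, 1.2, 1.0, 1.0])    # RDF of the current CG model
V1 = ibi_update(V0, g_cur, g_tgt)
```

In practice each update is followed by a new coarse-grained simulation to measure the next g_i, and the loop repeats until the target pair distribution function is reproduced.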
Abstract:
Decomposition-based approaches are recalled from the primal and dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulation to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve each of them. This trade-off is explored for several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality by using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices, representing different supply/demand scenarios. The goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented.
The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
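The Frank-Wolfe iteration used for the route assignment above can be sketched on a toy problem: minimizing a convex quadratic over the probability simplex. In traffic assignment the linear subproblem would be an all-or-nothing shortest-path assignment; here, with the simplex as the feasible set, it reduces to picking the best vertex. The target point is invented.

```python
import numpy as np

def frank_wolfe(grad, x0, iters=500):
    """Frank-Wolfe (conditional gradient) over the probability simplex."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # linear minimization oracle:
                                         # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)          # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex step toward the vertex
    return x

target = np.array([0.5, 0.3, 0.2])       # minimizer lies inside the simplex
grad = lambda x: x - target              # gradient of 0.5 * ||x - target||^2
x = frank_wolfe(grad, np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free; this is the property that makes the method attractive when the linear subproblem (a shortest-path computation, in the route assignment setting) is cheap.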