14 results for Boolean Computations
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
One of the main research areas of artificial intelligence concerns the realization of agents (in particular, robots) able to help or replace humans in the execution of certain activities. To this end, two different design methods can be followed: manual design and automatic design. The latter may be preferred to the former in contexts where requirements such as flexibility and adaptation must be taken into account, which are often essential for carrying out non-trivial tasks in real-world settings. Automatic design relies on a model with which to represent the agent's behaviour and a search (or learning) technique that iteratively modifies the model in order to make it as suited as possible to the task at hand. In this work, the model used to represent the robot's behaviour is a Boolean network (also known as a Kauffman network). This model was chosen because its simple structure makes the complex dynamics that arise within it easy to study. Moreover, the recent literature shows that network models, such as artificial neural networks, have proven effective in robot programming. The methodology for evolving this model relies on meta-heuristic search techniques able to find good solutions in limited time despite large search spaces. Previous works have already demonstrated the applicability of the methodology and investigated it on a single robot. The aim of this work is to provide a proof of principle for a set of robots, opening new avenues for design in swarm robotics. In this scenario, simple autonomous agents, by interacting with one another, give rise to a coordinated behaviour that accomplishes tasks impossible for the single unit. This work also provides useful and interesting opportunities for the study of interactions among Boolean networks. In fact, each robot is controlled by a Boolean network that determines the output as a function of its own internal configuration but also of the inputs received from neighbouring robots. In this work we define a task in which the swarm must discriminate between two different patterns on the arena floor using only locally exchanged information. After a first series of preliminary experiments that allowed us to identify the parameters and the best search algorithm, we simplified the problem instance in order to better investigate the criteria that can affect performance. A particular combination of information was thus identified which, when exchanged locally among robots, leads to an improvement in performance. This hypothesis was then confirmed by applying the result to a harder instance of the problem. The work concludes by suggesting new tools for the study of emergent phenomena in contexts where Boolean networks interact with one another.
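To make the model concrete, here is a minimal Python sketch (ours, not from the thesis) of a synchronous random Boolean network: each node reads K randomly chosen inputs and maps them through a random truth table.

```python
import random

def random_bn(n, k, seed=0):
    """Build a random Boolean network: each of the n nodes reads k
    randomly chosen inputs and maps them through a random truth table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """One synchronous update: every node recomputes its state at once."""
    nxt = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:                      # pack input bits into a table index
            idx = (idx << 1) | state[i]
        nxt.append(table[idx])
    return nxt

inputs, tables = random_bn(n=16, k=2)
rng = random.Random(1)
state = [rng.randint(0, 1) for _ in range(16)]
for _ in range(10):
    state = step(state, inputs, tables)
print(state)
```

In a robot controller along the lines described above, some nodes would be clamped to sensor readings at every step and others would be read out as actuator commands.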
Abstract:
A living cell is a complex system governed by many processes that are not yet fully understood: cell differentiation is one of them. In this thesis work we make use of a cell differentiation model to develop gene regulatory networks (Boolean networks) with the desired differentiation dynamics. To accomplish this task we introduced automatic design techniques and performed experiments using various differentiation trees. The results obtained have shown that the developed algorithms, except for the Random algorithm, are able to generate Boolean networks with interesting differentiation dynamics. Moreover, we have presented some possible future applications and developments of the cell differentiation model in robotics and in medical research. Understanding the mechanisms at work in biological cells may give us the possibility to explain dangerous diseases that are not yet understood, such as cancer.
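In models of this kind, attractors of the Boolean network are commonly interpreted as cell types, so evaluating a candidate network amounts to finding the attractors it reaches. A minimal illustrative sketch (ours, not the thesis code) of attractor detection by simulation:

```python
def find_attractor(state, step_fn):
    """Iterate from `state` until a state repeats; the tail of the
    trajectory from the first repeat onwards is the attractor."""
    seen, trajectory = {}, []
    s = tuple(state)
    while s not in seen:
        seen[s] = len(trajectory)
        trajectory.append(s)
        s = tuple(step_fn(list(s)))
    return trajectory[seen[s]:]

# Toy 3-node "network": each node copies its left neighbour.
rotate = lambda s: [s[-1]] + s[:-1]
print(find_attractor([1, 0, 0], rotate))   # a cyclic attractor of period 3
```

Here `step_fn` stands in for any synchronous Boolean network update, such as the one sketched earlier in this listing.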
Abstract:
The idea of Grid Computing originated in the nineties and found concrete applications in contexts like the SETI@home project, where a large number of computers (offered by volunteers) cooperated within the Grid environment, performing distributed computations to analyse radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers but, with the emergence of the first mobile devices like Personal Digital Assistants (PDAs), researchers started theorizing the inclusion of mobile devices in Grid Computing; although impressive theoretical work was done, the idea was discarded due to the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources on devices such as smartphones and tablets untapped. Here we propose a solution for performing distributed computations over a Grid Computing environment that uses both desktop and mobile devices, exploiting resources from day-to-day mobile users that would otherwise go unused. The work starts with an introduction to what Grid Computing is, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate in the Grid. Then the tone becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model that constitutes the solution offered by this study is explained, followed by a chapter on the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas to improve this project are presented.
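The core coordinator/worker idea can be illustrated with a very small sketch (ours, purely illustrative of the scheme, not the thesis prototype), where heterogeneous devices pull work units from a shared queue:

```python
import queue, threading

work, results = queue.Queue(), []

def worker(device_name):
    """A desktop or mobile device repeatedly pulling work units
    from the coordinator until none are left."""
    while True:
        try:
            unit = work.get_nowait()
        except queue.Empty:
            return
        results.append((device_name, sum(unit)))   # placeholder computation

for i in range(20):                                # enqueue 20 work units
    work.put(list(range(i, i + 5)))

devices = ["desktop-1", "phone-1", "tablet-1"]     # hypothetical device names
threads = [threading.Thread(target=worker, args=(d,)) for d in devices]
for t in threads: t.start()
for t in threads: t.join()
print(len(results), "work units processed")
```

A real Grid would of course replace the in-process queue with network communication and account for device availability, battery and connectivity.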
Abstract:
The main goal of this thesis is to understand and link together some of the early works by Michel Rumin and Pierre Julg. The work is centered around the so-called Rumin complex, which is a construction in subRiemannian geometry. A Carnot manifold is a manifold endowed with a horizontal distribution. If a metric is also given, one gets a subRiemannian manifold. Such data arise in different contexts, such as:
- formulation of the second principle of thermodynamics;
- optimal control;
- propagation of singularities for sums of squares of vector fields;
- real hypersurfaces in complex manifolds;
- ideal boundaries of rank one symmetric spaces;
- asymptotic geometry of nilpotent groups;
- modeling of human vision.
Differential forms on a Carnot manifold have weights, which produces a filtered complex. In view of applications to nilpotent groups, Rumin has defined a substitute for the de Rham complex, adapted to this filtration. The presence of a filtered complex also suggests the use of the formal machinery of spectral sequences in the study of cohomology. The goal was indeed to understand the link between Rumin's operator and the differentials which appear in the various spectral sequences we have worked with:
- the weight spectral sequence;
- a special spectral sequence introduced by Julg and called by him Forman's spectral sequence;
- Forman's spectral sequence (which turns out to be unrelated to the previous one).
We will see that in general Rumin's operator depends on choices. However, in some special cases it does not, because it has an alternative interpretation as a differential in a natural spectral sequence. After defining Carnot groups and analysing their main properties, we will introduce the concept of weights of forms, which produces a splitting of the exterior differential operator d. We shall see how the Rumin complex arises from this splitting and proceed to carry out the complete computations in some key examples. From the third chapter onwards we will focus on Julg's paper, describing his new filtration and its relationship with the weight spectral sequence. We will study the connection between the spectral sequences and Rumin's complex in the n-dimensional Heisenberg group and the 7-dimensional quaternionic Heisenberg group, and then generalize the result to Carnot groups using the weight filtration. Finally, we shall explain why Julg required the independence of choices in some special Rumin operators, introducing the Szegő map and describing its main properties.
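At the formula level, the splitting mentioned above can be summarized as follows (our notation, following Rumin's general construction rather than this thesis's text):

```latex
% Splitting of d by weight on a Carnot group: d_i raises the weight
% of a form by i, and d_0 is algebraic (zero order), hence defined
% pointwise.
d \;=\; d_0 + d_1 + \dots + d_r
% The Rumin complex is carried by the d_0-cohomology
E_0 \;=\; \ker d_0 \,/\, \operatorname{im} d_0
% together with an induced differential d_c satisfying d_c^2 = 0;
% the complex (E_0^\bullet, d_c) computes the de Rham cohomology.
```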
Abstract:
Automatic design has become a common approach to evolving complex networks, such as artificial neural networks (ANNs) and random Boolean networks (RBNs), and many evolutionary setups have been discussed to increase the efficiency of this process. However, networks evolved in this way have a few limitations that should not be overlooked. One of these is the black-box problem, i.e. the impossibility of analysing the internal behaviour of complex networks in an efficient and meaningful way. The aim of this study is to develop a methodology that makes it possible to extract finite-state automata (FSA) descriptions of robot behaviours from the dynamics of automatically designed complex controller networks. These FSAs, unlike the complex networks from which they are extracted, are both readable and editable, thus making the resulting designs much more valuable.
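As an illustration of the general idea (a sketch under our own simplifying assumptions, not the thesis pipeline), one can log the controller's discretized state trajectory and read the FSA transitions directly off the log:

```python
from collections import defaultdict

def extract_fsa(trace):
    """Build a finite-state automaton from a logged run of the
    controller network. `trace` is a list of (state, input) pairs,
    where `state` is already discretized (e.g. a cluster label).
    Returns a transition table: (state, input) -> set of next states;
    the FSA is deterministic if every set has exactly one element."""
    fsa = defaultdict(set)
    for (s, i), (s_next, _) in zip(trace, trace[1:]):
        fsa[(s, i)].add(s_next)
    return dict(fsa)

# Hypothetical logged run: cluster labels A/B/C and binary sensor input.
trace = [("A", 0), ("B", 0), ("A", 1), ("C", 1), ("A", 0), ("B", 1)]
print(extract_fsa(trace))
```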
Abstract:
Almost ten years after the article “The Free Lunch Is Over”, when the need to parallelize programs started to become a real and mainstream issue, a lot has happened:
• Processor manufacturers are reaching the physical limits of most of their approaches to boosting CPU performance, and are instead turning to hyperthreading and multicore architectures;
• Applications increasingly need to support concurrency;
• Programming languages and systems are increasingly forced to deal well with concurrency.
This thesis is an attempt to propose an overview of a paradigm that aims to properly abstract the problem of propagating data changes: Reactive Programming (RP). This paradigm proposes an asynchronous, non-blocking approach to concurrency and computations, abstracting from the low-level concurrency mechanisms.
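To give a flavour of the paradigm (a minimal hand-rolled Python sketch of the change-propagation idea, not any specific RP library), a reactive value pushes its changes to subscribers instead of being polled:

```python
class Signal:
    """A value that pushes change notifications to its subscribers."""
    def __init__(self, value):
        self._value = value
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)
        fn(self._value)            # emit the current value on subscription

    def set(self, value):
        self._value = value
        for fn in self._subs:      # propagate the change downstream
            fn(value)

def derive(signal, fn):
    """A signal computed from another one, recomputed on every change."""
    out = Signal(None)
    signal.subscribe(lambda v: out.set(fn(v)))
    return out

celsius = Signal(20)
fahrenheit = derive(celsius, lambda c: c * 9 / 5 + 32)
fahrenheit.subscribe(lambda f: print("F =", f))
celsius.set(25)   # the dependent value updates and prints automatically
```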
Abstract:
The first part of this essay aims at investigating the already available and promising technologies for biogas and bio-hydrogen production from anaerobic digestion of different organic substrates. One strives to show all the peculiarities of this complicated process, such as continuity, number of stages, moisture, biomass preservation and rate of feeding. The main outcome of this part is the awareness of the huge number of reactor configurations, each of which is suitable for a few types of substrate and circumstance. Among the most remarkable results, one may consider first of all the wet continuous stirred tank reactors (CSTR), well suited to face the high waste production rate in urbanised and industrialised areas. Then there is the up-flow anaerobic sludge blanket reactor (UASB), aimed at biomass preservation in case of highly heterogeneous feedstock, which can also be treated in a wise co-digestion scheme. On the other hand, smaller and scattered rural realities can be served by either wet low-rate digesters for homogeneous agricultural by-products (e.g. fixed-dome) or cheap dry batch reactors for lignocellulosic waste and energy crops (e.g. hybrid batch-UASB). The biological and technical aspects raised in the first chapters are later supported with bibliographic research on the important and multifarious large-scale applications the products of anaerobic digestion may have. After the upgrading techniques, particular care is devoted to their importance as biofuels, highlighting a further and more flexible solution consisting in reforming to syngas. Then one shows the electricity generation and the associated heat conversion, stressing the high potential of fuel cells (FC) as electricity converters. Last but not least, both the use as vehicle fuel and the injection into the gas grid are considered promising applications. Consideration of the still important issues of bio-hydrogen management (e.g. storage and delivery) leads to the conclusion that it would be far more challenging to implement than bio-methane, which can potentially “inherit” the assets of the similar fossil natural gas. Thanks to the gathered knowledge, one devotes a chapter to the energetic and financial study of a hybrid power system supplied by biogas and made of different pieces of equipment (natural gas thermocatalytic unit, molten carbonate fuel cell and combined-cycle gas turbine structure). A parallel analysis of a bio-methane-fed CCGT system is carried out in order to compare the two solutions. Both studies show that the apparent inconvenience of the hybrid system actually emphasises the importance of extending the computations to a broader reality, i.e. the upstream processes for biofuel production and the environmental and social drawbacks of fossil-derived emissions. Thanks to this “boundary widening”, one can appreciate the hidden benefits of the hybrid system over the CCGT system.
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of “replication by other scientists” in reference to computations is more commonly known as “reproducible research”. In this context, the journal “EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems” had the exciting and original idea of enabling scientists to submit, together with the article, the computational materials (software, data, etc.) that were used to produce the contents of the article. The goal of this procedure is to allow the scientific community to verify the content of the paper, reproducing it on the platform independently of the chosen OS, confirm or invalidate it, and especially allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support: the raw data sets and the software are difficult to exploit without the logic that guided their use or production. This led us to think that, in addition to the data sets and the software, an additional element must be provided: the workflow that ties all of them together.
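For instance (an illustrative sketch with hypothetical file names, not the journal's actual submission format), such a workflow can be captured as an explicit, replayable description of how the raw data and the software produce each artefact of the paper:

```python
# A tiny declarative workflow: each artefact lists the inputs and the
# command that produces it, so a reviewer can replay the whole chain.
# All paths and scripts below are invented for illustration.
workflow = {
    "figures/fig1.png": {
        "inputs": ["data/raw.csv", "scripts/clean.py", "scripts/plot.py"],
        "command": "python scripts/clean.py data/raw.csv | python scripts/plot.py",
    },
    "tables/results.tex": {
        "inputs": ["data/raw.csv", "scripts/analyze.py"],
        "command": "python scripts/analyze.py data/raw.csv",
    },
}

for artefact, rule in workflow.items():
    print(f"{artefact} <- {rule['inputs']} via: {rule['command']}")
```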
Abstract:
The application of measures derived from information theory provides a valid tool for quantifying some of the properties of complex systems. The same measures can be used in robotics to support the analysis and synthesis of robot control systems. In this thesis we analysed the correlation between some complexity measures and the ability of robots to successfully complete three different tasks. The results obtained suggest that such complexity measures are a promising tool in the field of robotics as well, but that their use can become difficult when applied to composite tasks.
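As an example of such a measure (a sketch in our own notation, not the thesis code), the mutual information between two discretized time series, e.g. a sensor stream and an actuator stream, can be estimated from their joint empirical distribution:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from two aligned discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical binarized sensor and actuator logs from a robot run.
sensor   = [0, 0, 1, 1, 0, 1, 0, 1]
actuator = [0, 0, 1, 1, 0, 1, 1, 1]
print(round(mutual_information(sensor, actuator), 3))
```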
Abstract:
Benzoquinone was found to be an effective co-catalyst in the ruthenium/NaOEt-catalyzed Guerbet reaction. Its behavior as a co-catalyst has therefore been investigated through experimental and computational methods. The distribution of reaction products shows that the reaction speed is improved by the benzoquinone supplement from the beginning of the process, with a minimal effect on the selectivity toward alcoholic species. DFT calculations were performed to investigate two hypotheses for the kinetic effects: i) a hydrogen storage mechanism, or ii) basic co-catalysis by 4-hydroxyphenolate. The most promising results were found for the latter hypothesis, where a new mixed mechanism for the aldol condensation step of the Guerbet process involves hydroquinone (i.e. the reduced form of benzoquinone) as the proton source instead of ethanol. This mechanism was found to be energetically more favorable than aldol condensation in the absence of the additive, suggesting that the hydroquinone derived from benzoquinone could be the key species affecting the kinetics of the overall process. To verify this theoretical hypothesis, new phenol derivatives were tested as additives in the Guerbet reaction. The outcomes confirmed that an aromatic acid (stronger than ethanol) can improve the reaction kinetics. Lastly, theoretical product distributions were simulated and compared to the experimental ones, using the DFT computations to build the kinetic models.
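To illustrate the last step only (a schematic sketch with invented rate constants and a deliberately oversimplified reaction scheme, not the actual DFT-derived model), a kinetic model can be integrated numerically to predict a product distribution:

```python
from scipy.integrate import solve_ivp

# Toy linearized Guerbet chain: ethanol -> acetaldehyde -> butanol.
# The rate constants k1, k2 are placeholders; in the thesis they
# would be derived from the DFT-computed energy barriers.
k1, k2 = 0.5, 0.2

def rhs(t, c):
    etoh, acet, buoh = c
    return [-k1 * etoh, k1 * etoh - k2 * acet, k2 * acet]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0])
print("final concentrations:", sol.y[:, -1].round(3))
```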
Abstract:
Combinatorial decision and optimization problems arise in numerous applications, such as logistics and scheduling, and can be solved with various approaches. Boolean Satisfiability and Constraint Programming solvers are among the most used, and their performance is significantly influenced by the model chosen to represent a given problem. This has led to the study of model reformulation methods, one of which is tabulation, which consists of rewriting the expression of a constraint in terms of a table constraint. To apply it, one should identify which constraints can help and which can hinder the solving process. So far this has been performed by hand, for example in MiniZinc, or automatically with manually designed heuristics, as in Savile Row. However, it has been shown that the performance of these heuristics differs across problems and solvers, in some cases helping and in others hindering the solving procedure. Recent works in the field of combinatorial optimization have shown that Machine Learning (ML) can be increasingly useful in the model reformulation steps. This thesis aims to design an ML approach to identify the instances for which Savile Row's heuristics should be activated. Additionally, since the heuristics may miss some good tabulation opportunities, we perform an exploratory analysis for the creation of an ML classifier able to predict whether or not a constraint should be tabulated. The results towards the first goal show that a random forest classifier leads to an increase in the performance of four different solvers. The experimental results on the second task show that an ML approach could improve the performance of a solver for some problem classes.
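Concretely (a minimal scikit-learn sketch with invented feature values, not the thesis pipeline), the per-instance decision can be framed as binary classification over instance features:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row describes a problem instance through hypothetical features
# (e.g. number of variables, number of constraints, max domain size);
# the label says whether enabling Savile Row's tabulation heuristics
# paid off on that instance.
X = [[120, 300, 8], [40, 90, 4], [500, 1200, 16], [60, 150, 4],
     [220, 640, 10], [35, 80, 3], [410, 980, 12], [90, 210, 6]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=4).mean())
```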
Abstract:
Miniaturized flying robotic platforms, called nano-drones, have the potential to revolutionize the autonomous robot industry thanks to their very small form factor. The nano-drones' limited payload only allows for a sub-100 mW microcontroller unit for the on-board computations. Therefore, traditional computer vision and control algorithms are too computationally expensive to be executed on board these palm-sized robots, and we are forced to rely on artificial intelligence, trading off accuracy in favor of lightweight pipelines for autonomous tasks. However, relying on deep learning exposes us to the problem of generalization, since the deployment scenario of a convolutional neural network (CNN) is often composed of visual cues and features different from those learned during training, leading to poor inference performance. Our objective is to develop and deploy an adaptation algorithm, based on the concept of latent replays, that allows us to fine-tune a CNN to work in new and diverse deployment scenarios. To do so, we start from an existing model for visual human pose estimation, called PULP-Frontnet, which identifies the pose of a human subject in space through its 4 output variables, and we present the design of our novel adaptation algorithm, which features automatic data gathering and labeling and on-device deployment. We showcase the ability of our algorithm to adapt PULP-Frontnet to new deployment scenarios, improving the R2 scores of the four network outputs, with respect to an unknown environment, from approximately [−0.2, 0.4, 0.0, −0.7] to [0.25, 0.45, 0.2, 0.1]. Finally, we demonstrate that it is possible to fine-tune our neural network in real time (i.e., under 76 seconds), using the target parallel ultra-low-power GAP 8 System-on-Chip on board the nano-drone, and we show that all adaptation operations can take place using less than 2 mWh of energy, a small fraction of the available battery capacity.
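Schematically (a PyTorch sketch of the general latent-replay idea with a toy model, not the actual PULP-Frontnet deployment code), the frozen backbone's activations for old data are stored once and replayed while only the head is fine-tuned on the new scenario:

```python
import torch
import torch.nn as nn

# Toy stand-ins for a CNN backbone and a 4-output pose-regression head.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 4)

for p in backbone.parameters():            # freeze the backbone
    p.requires_grad = False

# Latent replays: backbone activations of old data, computed once.
old_images = torch.randn(32, 3, 48, 48)
with torch.no_grad():
    replay_latents = backbone(old_images)
replay_targets = torch.randn(32, 4)

# New-scenario data (self-labeled on board in the thesis's setting).
new_images, new_targets = torch.randn(32, 3, 48, 48), torch.randn(32, 4)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for _ in range(10):
    new_latents = backbone(new_images)     # backbone is frozen
    latents = torch.cat([replay_latents, new_latents])
    targets = torch.cat([replay_targets, new_targets])
    loss = nn.functional.mse_loss(head(latents), targets)
    opt.zero_grad(); loss.backward(); opt.step()
```

Mixing replayed latents with new-scenario data is what lets the head adapt without catastrophically forgetting the original environments, while keeping memory and compute within a microcontroller-class budget.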