799 results for Tuning algorithm
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the distinctive structures of the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy can be very useful for studying this complex self-organisation phenomenon. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. Building on these premises, the thesis reviews the different approaches already developed for modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To address the problem of parameter tuning, the simulators are supplied with a module for parameter optimisation.
The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data, with spatial and temporal resolution, obtained from free on-line sources.
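The simulation engine named above builds on Gillespie's direct method. A minimal sketch of the textbook (non-optimised) version is given below; the one-reaction system at the end is a hypothetical toy example, not a model from MS-BioNET:

```python
import math
import random

def gillespie_direct(x, reactions, t_end, seed=0):
    """Textbook Gillespie direct method (stochastic simulation algorithm).

    x         -- dict of species counts (mutated in place)
    reactions -- list of (propensity_fn, state_change_dict) pairs
    Returns a trajectory as a list of (time, state_snapshot) pairs.
    """
    rng = random.Random(seed)
    t = 0.0
    traj = [(t, dict(x))]
    while t < t_end:
        props = [pf(x) for pf, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:                  # no reaction can fire any more
            break
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        r = rng.random() * a0          # choose reaction j with prob. props[j]/a0
        acc = 0.0
        for j, p in enumerate(props):
            acc += p
            if r < acc:
                break
        for species, delta in reactions[j][1].items():
            x[species] += delta
        traj.append((t, dict(x)))
    return traj

# Hypothetical toy system: irreversible isomerisation A -> B, rate k = 1.0
k = 1.0
reactions = [(lambda s: k * s["A"], {"A": -1, "B": +1})]
traj = gillespie_direct({"A": 100, "B": 0}, reactions, t_end=100.0)
```

The optimised many-species/many-channels variant used in the thesis additionally avoids recomputing all propensities at every step; the sketch above recomputes them naively for clarity.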
Abstract:
In the course of this work the effect of metal substitution on the structural and magnetic properties of the double perovskites Sr2MM'O6 (M = Fe, substituted by Cr, Zn and Ga; M' = Re, substituted by Sb) was explored by means of X-ray diffraction, magnetic measurements, band structure calculations, Mößbauer spectroscopy and conductivity measurements. The focus of this study was the determination of (i) the kind and structural boundary conditions of the magnetic interaction between the M and M' cations and (ii) the conditions under which double perovskites can in principle be applied as spintronic materials, within the band-model approach. Strong correlations between the electronic, structural and magnetic properties have been found during the study of the double perovskites Sr2Fe1-xMxReO6 (0 < x < 1, M = Zn, Cr). The interplay between the van Hove singularity and the Fermi level plays a crucial role in the magnetic properties. Substitution of Fe by Cr in Sr2FeReO6 leads to a non-monotonic behaviour of the saturation magnetization (MS) and an enhancement for substitution levels up to 10 %. The Curie temperatures (TC) monotonically increase from 401 to 616 K. In contrast, Zn substitution leads to a continuous decrease of MS and TC. The diamagnetic dilution of the Fe-sublattice by Zn leads to a transition from an itinerant ferrimagnetic to a localized ferromagnetic material. Thus, Zn substitution inhibits the long-range ferromagnetic interaction within the Fe-sublattice and preserves the long-range ferromagnetic interaction within the Re-sublattice. Superimposed on the electronic effects is the structural influence, which can be explained by size effects modelled by the tolerance factor t. In the case of Cr substitution, a tetragonal-to-cubic transformation for x > 0.4 is observed. For Zn-substituted samples the tetragonal distortion increases linearly with increasing Zn content.
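For reference, the tolerance factor t invoked above is, for a double perovskite Sr2MM'O6, the standard Goldschmidt ratio computed with the averaged B-site radius (a textbook definition, not one specific to this work):

```latex
t = \frac{r_A + r_O}{\sqrt{2}\,\bigl(\langle r_B \rangle + r_O\bigr)},
\qquad
\langle r_B \rangle = \tfrac{1}{2}\,\bigl(r_M + r_{M'}\bigr)
```

where the ionic radii of the A cation (here Sr), the averaged B-site cations (M/M') and oxygen enter; t close to 1 favours the cubic structure, while t < 1 drives the tetragonal distortion discussed above.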
In order to elucidate the nature of the magnetic interaction between the M and M' cations, Fe and Re were substituted by the valence-invariant main group metals Ga and Sb, respectively. X-ray diffraction reveals Sr2FeRe1-xSbxO6 (0 < x < 0.9) to crystallize without antisite disorder in the tetragonally distorted perovskite structure (space group I4/mmm). The ferrimagnetic behaviour of the parent compound Sr2FeReO6 changes to antiferromagnetic upon Sb substitution, as determined by magnetic susceptibility measurements. Samples up to a doping level of 0.3 are ferrimagnetic, while Sb contents higher than 0.6 result in an overall antiferromagnetic behaviour. 57Fe Mößbauer results show a coexistence of ferri- and antiferromagnetic clusters within the same perovskite-type crystal structure in the Sb substitution range 0.3 < x < 0.8, whereas Sr2FeReO6 and Sr2FeRe0.9Sb0.1O6 are "purely" ferrimagnetic and Sr2FeRe0.1Sb0.9O6 contains antiferromagnetically ordered Fe sites only. Consequently, a replacement of the Re atoms by a nonmagnetic main group element such as Sb blocks the double exchange pathways Fe–O–Re(Sb)–O–Fe along the crystallographic axes of the perovskite unit cell and destroys the itinerant magnetism of the parent compound. The structural and magnetic characterization of Sr2Fe1-xGaxReO6 (0 < x < 0.7) reveals a Ga/Re antisite disorder, which is unexpected because the parent compound Sr2FeReO6 shows no Fe/Re antisite disorder. This antisite disorder strongly depends on the Ga content of the sample. Although the X-ray data do not hint at a phase separation, sample inhomogeneities caused by demixing are observed by a combination of magnetic characterization and Mößbauer spectroscopy. The 57Fe Mößbauer data suggest the formation of two types of clusters: ferrimagnetic Fe-based and paramagnetic Ga-based ones. Below 20 % Ga content, Ga statistically dilutes the Fe–O–Re–O–Fe double exchange pathways.
Cluster formation begins at x = 0.2; for 0.2 < x < 0.4 the paramagnetic Ga-based clusters do not contain any Fe. Fe-containing Ga-based clusters, detectable by Mößbauer spectroscopy, first appear at x = 0.4.
Abstract:
Complex network analysis is a very popular topic in computer science. Unfortunately these networks, extracted from different contexts, are usually very large and their analysis can be complicated: computing metrics on such structures may be very expensive. Among all metrics, we analyse the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with huge numbers of nodes and edges. After an introduction to graph theory and high-performance computing, we explain our design strategies and our implementation. Then we show a performance evaluation made on a distributed-memory architecture, i.e. the IBM BlueGene/Q supercomputer "Fermi" at the CINECA supercomputing centre, Italy, and comment on our results.
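The notion of a community can be illustrated with a simple sequential label-propagation sketch: each node repeatedly adopts the most frequent label among its neighbours, and the surviving label groups are the communities. This is only an illustration of the concept, not the parallel algorithm developed in the thesis:

```python
import random

def label_propagation(adj, max_iter=100, seed=0):
    """Sequential label-propagation community detection.

    adj -- dict mapping each node to a list of neighbours.
    Returns a dict node -> community label.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # start with one label per node
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)                # asynchronous, random visit order
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts, key=counts.get)
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:                   # converged: every node is stable
            break
    return labels

# Two triangles joined by a single bridge edge: two natural communities
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
```

A parallel version must additionally decide how to partition the graph across processes and how to exchange boundary labels, which is where the distributed-memory design effort goes.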
Abstract:
This thesis presents different techniques designed to drive a swarm of robots in an a-priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS). The first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour arising from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimise difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization (PSO) algorithm, exploiting its features as a navigation system. Graph Theory is applied by exploiting Consensus and the agreement protocol, with the aim of keeping the units in a desired and controlled formation. This approach is followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
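The PSO update rule underlying the navigation idea can be sketched on a toy objective; this is the generic algorithm with default parameters, not the thesis's robotic controller:

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal Particle Swarm Optimization (minimisation).

    Each particle's velocity is pulled toward its personal best position
    and the swarm's global best, weighted by random factors.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:            # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:           # and, if needed, global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: the sphere function, minimum 0 at the origin
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the navigation setting, f would score candidate positions by distance to the target and obstacle proximity, and the Consensus layer would then constrain the particles (robots) to a formation.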
Abstract:
The aim of my thesis is to parallelise the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing, in cooperation with another algorithm called Umbrella Sampling. Umbrella Sampling adds a bias to the potential energy of the system in order to force the system to sample a specific region of configurational space. N independent simulations are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original system energy starting from the N atomic trajectories. The parallelisation of WHAM has been performed with CUDA, a language for programming the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can significantly speed up WHAM execution compared to previous serial CPU implementations; the CPU code, in fact, becomes critically slow for very large numbers of iterations. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, showing a performance increase when the model was executed on graphics cards with higher compute capability. Nonetheless, the GPUs used to test the algorithm were quite old and not designed for scientific calculations; a further performance increase is likely if the algorithm were executed on GPU clusters with high computational efficiency. The thesis is organised as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm, with their applications to the study of ionic channels and to Molecular Docking (Chapter 1); then I present the CUDA architecture used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
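The WHAM self-consistency equations can be sketched in a discretised one-dimensional form. The serial toy implementation below shows the two equations that are iterated to convergence (the toy data at the end are invented; this is not the thesis's CUDA implementation):

```python
import math

def wham(hists, n_samples, bias, kT=1.0, tol=1e-10, max_iter=10000):
    """Self-consistent WHAM iteration on a discretised 1-D coordinate.

    hists[i][b]  -- histogram counts of umbrella window i in bin b
    n_samples[i] -- total number of samples in window i
    bias[i][b]   -- umbrella bias energy of window i evaluated at bin b
    Returns (unbiased probability per bin, window free energies f_i).
    """
    n_win, n_bins = len(hists), len(hists[0])
    f = [0.0] * n_win
    total = [sum(h[b] for h in hists) for b in range(n_bins)]
    for _ in range(max_iter):
        # P(b) = sum_i n_i(b) / sum_i N_i * exp((f_i - U_i(b)) / kT)
        p = [total[b] / sum(n_samples[i] * math.exp((f[i] - bias[i][b]) / kT)
                            for i in range(n_win))
             for b in range(n_bins)]
        # f_i = -kT * ln( sum_b P(b) * exp(-U_i(b) / kT) )
        new_f = [-kT * math.log(sum(p[b] * math.exp(-bias[i][b] / kT)
                                    for b in range(n_bins)))
                 for i in range(n_win)]
        new_f = [fi - new_f[0] for fi in new_f]     # fix the gauge: f_0 = 0
        converged = max(abs(a - b) for a, b in zip(new_f, f)) < tol
        f = new_f
        if converged:
            break
    z = sum(p)
    return [pi / z for pi in p], f

# Toy check: two windows with zero bias -> P is just the pooled histogram
p, f = wham([[4, 6], [6, 4]], [10, 10], [[0.0, 0.0], [0.0, 0.0]])
```

The two inner sums over windows and bins are exactly the loops that a GPU implementation parallelises, since each bin (and each window) can be evaluated independently.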
Abstract:
This thesis discusses the benefits obtainable from automatic parameter tuning techniques, applying an implementation of iterated racing to an innovative self-organising traffic light control system inspired by swarm intelligence concepts.
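A bare-bones sketch of the racing idea behind iterated racing: surviving candidate configurations are evaluated on one problem instance at a time, and the worst performers are periodically eliminated. Mean-based elimination here stands in for the statistical tests of a real racing implementation, and the tuning task is hypothetical:

```python
import statistics

def race(candidates, evaluate, instances, drop_every=3):
    """Evaluate surviving candidates instance by instance; every
    `drop_every` instances, discard the half with the worst mean cost."""
    alive = list(range(len(candidates)))
    costs = {i: [] for i in alive}
    for step, inst in enumerate(instances, 1):
        for i in alive:
            costs[i].append(evaluate(candidates[i], inst))
        if step % drop_every == 0 and len(alive) > 1:
            alive.sort(key=lambda i: statistics.mean(costs[i]))
            alive = alive[:max(1, len(alive) // 2)]   # keep the better half
    return candidates[min(alive, key=lambda i: statistics.mean(costs[i]))]

# Hypothetical tuning task: pick the gain parameter closest to 2.0
cands = [0.5, 1.0, 2.0, 3.5]
best = race(cands, lambda c, inst: abs(c - 2.0) + 0.01 * inst, range(9))
```

The pay-off is that poor configurations are discarded after a few instances instead of being evaluated on all of them, which matters when each evaluation is a full traffic simulation.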
Abstract:
The aim of this thesis was to investigate some important key factors able to promote the prospected growth of the aquaculture sector. The limited availability of fishmeal and fish oil has led the aquafeed industry to reduce its dependency on marine raw materials in favour of vegetable ingredients. In Chapter 2, we report the effects of fishmeal replacement by a mixture of plant proteins in turbot (Psetta maxima L.) juveniles. At the end of the trial, it was found that plant protein inclusion above 15% can cause stress and exert negative effects on growth performance and welfare. Climate change has drawn the attention of the aquafeed industry toward the production of specific diets capable of counteracting high temperatures. In Chapter 3, we investigated the most suitable dietary lipid level for gilthead seabream (Sparus aurata L.) reared at Mediterranean summer temperature. This trial highlighted that an 18% dietary lipid level allows a protein-sparing effect, thus making the farming of this species economically and environmentally more sustainable. The introduction of new farmed fish species makes the development of new species-specific diets necessary. In Chapter 4, we assessed the growth response and feed utilization of common sole (Solea solea L.) juveniles fed graded dietary lipid levels. At the end of the trial, it was found that increasing dietary lipids above 8% led to a substantial decline in growth performance and feed utilization indices. In Chapter 5, we investigated the suitability of mussel meal as an alternative ingredient in diets for common sole juveniles. Mussel meal proved to be a very effective alternative ingredient for enhancing growth performance, feed palatability and feed utilization in sole, irrespective of the tested inclusion levels. This thesis highlighted the importance of formulating more specific diets in order to support aquaculture growth in a sustainable way.
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation involves sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision-support-system generator for multi-objective optimisation was used: GANetXL, developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which produced the Pareto front of each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps boosting the water supply. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs), installing inverters capable of varying their velocity during the day. In this way, large energy and cost savings were achieved, together with a reduction in the number of pump switches. The results of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the network of Cabrera city, a smaller WDN with a single fixed-speed pump.
The optimisation problem was the same: minimisation of energy consumption and, in parallel, minimisation of TNps, using the same optimisation tool (GANetXL). The main scope was to carry out several different experiments over a vast variety of configurations, using different pumps (this time keeping the fixed-speed mode), different tank levels, different pipe diameters and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options to deal with: a large number of algorithms to choose from, different techniques and configurations, and different decision-support-system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
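NSGA-II ranks solutions by Pareto dominance; its first step can be sketched as extracting the non-dominated front. The (energy cost, pump switches) values below are invented toy numbers, not results from the thesis:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors
    (minimisation) -- the first front NSGA-II builds while sorting."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (energy cost in EUR, pump switches) outcomes of four schedules
sols = [(120.0, 6), (100.0, 8), (150.0, 4), (130.0, 9)]
front = pareto_front(sols)
```

The schedule (130.0, 9) is dominated by (120.0, 6) (cheaper and fewer switches) and drops out; the other three are mutually non-dominated trade-offs, which is exactly what a Pareto front of cost versus pump switches expresses.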
Abstract:
This thesis work concerns the characterisation of an optical sensor for haematocrit readings and the development of the device's calibration algorithm. In other words, using data obtained from an appropriately planned calibration session, the developed algorithm returns the data-interpolation curve that characterises the transducer. The main steps of the thesis work are summarised in the following points: 1) Planning of the calibration session needed for data collection and subsequent construction of a black-box model. Output: the optical sensor reading (expressed in mV). Input: the haematocrit value expressed in percentage points (this quantity represents the true blood-volume value and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and used offline, returns the regression curve of the data. Macroscopically, the code can be divided into two main parts: 1- acquisition of the sensor data and of the operating state of the two-phase pump; 2- normalisation of the acquired data with respect to the sensor's reference value and implementation of the regression algorithm. The data-normalisation step is a fundamental statistical tool for comparing quantities that are not uniform with one another. Existing studies also demonstrate a morphological change of red blood cells in response to mechanical stress. A further aspect addressed in this work concerns the blood flow velocity determined by the pump and how this quantity can influence the haematocrit reading.
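The regression step can be sketched with an ordinary least-squares line fit followed by inversion of the calibration curve. The calibration points below are invented toy values (perfectly linear), not data from the actual sensor:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b, closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical calibration points: haematocrit (%) vs sensor reading (mV)
hct = [20.0, 25.0, 30.0, 35.0, 40.0]
mv  = [410.0, 455.0, 500.0, 545.0, 590.0]
a, b = fit_line(hct, mv)

def hct_from_mv(reading):
    """Invert the calibration curve: sensor reading (mV) -> haematocrit (%)."""
    return (reading - b) / a
```

On real, normalised sensor data the fit would not be exact and a higher-order interpolation curve might be preferred, but the pattern (fit during calibration, invert during use) is the same.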
Abstract:
This dissertation deals with tuning the absorption and orbital energies of perylene monoimides and perylene diimides for application in organic photovoltaics (OPV). Broad absorption plays an important role here in harvesting as much light as possible, which is then converted into electrical energy. To ensure that the cell can work efficiently, the alignment of orbital energies is a second important prerequisite. Three new design concepts for perylene monoimide sensitisers for solid-state dye-sensitised solar cells (sDSSCs) are investigated. The synthesis, the optical and electronic properties of the new sensitiser compounds, as well as their performance in sDSSCs, are described and discussed. The concepts presented in this work range from the introduction of π-spacers, through new functionalisations, to the extension of the perylene monoimide core. The push-pull character of the systems varies from strong coupling to complete decoupling of the donor from the acceptor. This has a strong influence on the absorption properties as well as on the HOMO/LUMO energy levels of the compounds. Some of the concepts can be transferred to perylene diimides. An example of perylene diimide (PDI) colour tuning is shown on a series of three terthiophene-PDIs.
Abstract:
The dissertation entitled "Tuning of magnetic exchange interactions between organic radicals through bond and space" comprises eight chapters. The initial part of chapter 1 gives an overview of organic radicals and their applications; the latter part describes the motivation and objectives of the thesis. As EPR spectroscopy is an essential tool for studying organic radicals, the basic principles of EPR spectroscopy are discussed in chapter 2. Antiferromagnetically coupled species can be considered a source of interacting bosons. Consequently, such biradicals can serve as molecular models of a gas of magnetic excitations, which can be used for quantum computing or quantum information processing. Notably, an initially small triplet-state population in weakly AF-coupled biradicals can be switched to a larger one in the presence of an applied magnetic field. Such biradical systems are promising molecular models for studying magnetic-field-induced Bose-Einstein condensation in the solid state. To observe such phenomena it is very important to control the intra- as well as inter-molecular magnetic exchange interactions. Chapters 3 to 5 deal with the tuning of intra- and inter-molecular exchange interactions using different approaches, including changing the length of the π-spacer, introducing functional groups, forming metal complexes with a diamagnetic metal ion, and varying the radical moieties. During this study I came across two very interesting molecules, 2,7-TMPNO and BPNO, which exist in a semi-quinoid form and exhibit characteristics of the biradical and quinoid forms simultaneously. 2,7-TMPNO possesses a singlet-triplet energy gap of ΔEST = –1185 K, so it is practically impossible to observe magnetic-field-induced spin switching; we therefore studied the spin switching of this molecule by photo-excitation, as discussed in chapter 6.
The structural similarity of BPNO with Tschitschibabin's hydrocarbon allowed us to investigate the discrepancies related to the ground state of Tschitschibabin's hydrocarbon (discussed in chapter 7). Finally, chapter 8 discusses the synthesis and characterisation of a neutral paramagnetic HBC derivative (HBCNO). The magnetic liquid-crystalline properties of HBCNO were studied by DSC and EPR spectroscopy.
Abstract:
This thesis work was carried out at the Medical Physics department of the Sant'Orsola-Malpighi polyclinic in Bologna. The study focused on the comparison between standard reconstruction techniques (Filtered Back Projection, FBP) and iterative ones in Computed Tomography. The work was divided into two parts: in the first, the quality of images acquired with a multislice CT scanner (iCT 128, Philips) was analysed using both the FBP algorithm and the iterative one (in our case iDose4). To assess image quality, the following parameters were analysed: the Noise Power Spectrum (NPS), the Modulation Transfer Function (MTF) and the contrast-to-noise ratio (CNR). The first two quantities were studied through measurements on a phantom supplied by the manufacturer, which simulated the body and head sections with two cylinders of 32 and 20 cm respectively. The measurements confirm the noise reduction, though to different degrees for the different convolution filters used. The MTF study instead revealed that using standard or iterative techniques does not change the spatial resolution: the curves obtained are practically identical (apart from the intrinsic differences in the convolution filters), contrary to what the manufacturer claims. For the CNR analysis two phantoms were used: the first, the Catphan 600, is the phantom used to characterise CT systems; the second, the Cirs 061, contains inserts that simulate lesions with densities typical of the abdominal district. The study showed that, for both phantoms, the contrast-to-noise ratio increases when the iterative reconstruction technique is used.
The second part of the thesis work was to assess the dose reduction, considering several protocols used in clinical practice: a large number of examinations were analysed and the mean CTDI and DLP values were computed on a sample of examinations reconstructed with FBP and with iDose4. The results show that the values obtained with the iterative algorithm are below the national DRL reference values and below those obtained without iterative systems.
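The CNR figure used in the comparison can be sketched as the mean difference between a lesion region of interest and the background, divided by the background noise. The pixel values below are toy numbers chosen only to illustrate why lower background noise (as with iterative reconstruction) raises CNR at equal contrast:

```python
import statistics

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| divided by
    the background standard deviation (population form)."""
    return (abs(statistics.mean(roi) - statistics.mean(background))
            / statistics.pstdev(background))

# Toy pixel values: same lesion contrast, different background noise
lesion  = [110, 112, 108, 111]
bg_fbp  = [100, 104, 96, 100]    # noisier background (FBP-like)
bg_iter = [100, 101, 99, 100]    # smoother background (iterative-like)
```

With equal mean contrast, the smoother background yields the larger CNR, which is the effect measured on the Catphan 600 and Cirs 061 phantoms.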
Abstract:
This thesis proposes a middleware solution for scenarios in which sensors produce a large amount of data that must be managed and processed through preprocessing, filtering and buffering operations, in order to improve communication efficiency and bandwidth consumption while respecting energy and computational constraints. These components can be optimised through remote tuning operations.
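A minimal sketch of the filtering-and-buffering idea: drop readings that barely differ from the last kept one (delta filtering) and batch the rest so fewer, larger messages are sent. The class and its parameters are hypothetical, not the thesis's middleware API; `threshold` and `batch_size` are exactly the kind of knobs the remote tuning would adjust:

```python
class SensorBuffer:
    """Delta-filter sensor readings and flush them in batches."""

    def __init__(self, send, threshold=0.5, batch_size=4):
        self.send = send                # callback that transmits one batch
        self.threshold = threshold      # minimum change worth reporting
        self.batch_size = batch_size    # readings per transmitted message
        self.last = None
        self.buf = []

    def push(self, reading):
        # filtering: drop readings too close to the last kept one
        if self.last is not None and abs(reading - self.last) < self.threshold:
            return
        self.last = reading
        self.buf.append(reading)
        # buffering: flush a full batch as a single message
        if len(self.buf) >= self.batch_size:
            self.send(self.buf)
            self.buf = []

sent = []
sb = SensorBuffer(sent.append, threshold=0.5, batch_size=2)
for r in [20.0, 20.1, 20.2, 21.0, 21.1, 22.0]:
    sb.push(r)
```

Of the six readings, only three pass the filter and only one two-reading message is actually sent, illustrating the bandwidth saving.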
Abstract:
A novel computerized algorithm for hip joint motion simulation and collision detection, called the Equidistant Method, has been developed. This was compared to three pre-existing methods having different properties regarding definition of the hip joint center and behavior after collision detection. It was proposed that the Equidistant Method would be most accurate in detecting the location and extent of femoroacetabular impingement.
Abstract:
Residual acetabular dysplasia of the hip in most patients can be corrected by periacetabular osteotomy. However, some patients have intraarticular abnormalities causing insufficient coverage, containment or congruency after periacetabular osteotomy, or extraarticular abnormalities that limit either acetabular correction or hip motion. For these patients, we believe an additional proximal femoral osteotomy can improve coverage, containment, congruency and/or motion.