15 results for formation of large scale structure
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, ideal for estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps, as well as their individual auto-spectra. Such a tool is an optimal estimator (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic-space analyses present in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps by means of Monte Carlo simulations. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. Conversely, we neglect the noise in temperature, since WMAP is already cosmic variance dominated on large scales. Because of a discrepancy between the estimated galaxy auto-spectrum and the theoretical model, we use two different galaxy distributions: the first with a constant bias $b$ and the second with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model using different likelihood prescriptions. Using just the cross-correlation between the WMAP7 and NVSS maps at 1.8° resolution, we show that $\Omega_\Lambda$ accounts for about 70\% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2 $\sigma$ CL (confidence level).
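For reference, the pixel-space QML estimator has the standard quadratic form (written here in the usual Tegmark-style notation; the BolISW-specific treatment of masks, noise and cross-spectra is detailed in the thesis):

$$ \hat{y}_\ell = \mathbf{x}^{T} \mathbf{E}^{\ell} \mathbf{x} - \mathrm{tr}\left(\mathbf{N}\,\mathbf{E}^{\ell}\right), \qquad \mathbf{E}^{\ell} = \frac{1}{2}\,\mathbf{C}^{-1}\,\frac{\partial \mathbf{C}}{\partial C_\ell}\,\mathbf{C}^{-1}, $$

$$ \hat{C}_\ell = \sum_{\ell'} \left(F^{-1}\right)_{\ell\ell'}\,\hat{y}_{\ell'}, \qquad F_{\ell\ell'} = \frac{1}{2}\,\mathrm{tr}\left(\mathbf{C}^{-1}\,\frac{\partial \mathbf{C}}{\partial C_\ell}\,\mathbf{C}^{-1}\,\frac{\partial \mathbf{C}}{\partial C_{\ell'}}\right), $$

where $\mathbf{x}$ is the data vector (CMB and galaxy maps), $\mathbf{C}$ its total covariance, $\mathbf{N}$ the noise covariance and $F$ the Fisher matrix; unbiasedness and minimum variance follow from the estimator saturating the Cramér-Rao bound.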
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed in the last decades to effectively handle the approximate solution of combinatorial optimization problems; we will examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which makes it easy to model any type of problem and solve it with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism. In particular, we show its integration within the CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on problems of practical size, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
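To illustrate the local-branching idea that the thesis ports from MIP to CP, the following minimal sketch (toy binary problem, hypothetical names; not the thesis's CP implementation) restricts the search to the neighborhood of solutions within Hamming distance k of the incumbent, which is precisely the neighborhood defined by the local branching constraint:

from itertools import combinations

def local_branching_neighborhood(incumbent, k):
    """Enumerate all 0/1 vectors within Hamming distance k of the incumbent."""
    n = len(incumbent)
    for r in range(k + 1):
        for flips in combinations(range(n), r):
            candidate = list(incumbent)
            for i in flips:
                candidate[i] = 1 - candidate[i]
            yield tuple(candidate)

def improve(incumbent, objective, feasible, k=2):
    """One local-branching step: search the k-neighborhood for a better feasible point."""
    best, best_val = incumbent, objective(incumbent)
    for cand in local_branching_neighborhood(incumbent, k):
        if feasible(cand) and objective(cand) < best_val:
            best, best_val = cand, objective(cand)
    return best, best_val

# Toy usage: minimize the number of ones subject to having at least two ones.
obj = lambda x: sum(x)
feas = lambda x: sum(x) >= 2
print(improve((1, 1, 1, 0), obj, feas, k=2))  # -> a solution with objective 2, e.g. ((0, 1, 1, 0), 2)

In a CP setting the same neighborhood would be imposed as a constraint guiding the tree search rather than enumerated explicitly; SNS, as described above, explores such large neighborhoods one slice at a time.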
Abstract:
The continuous increase of genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone can only determine raw nucleotide sequences. This is only the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. Annotation is carried out at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods that allow a fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is to be independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, resulting in the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the state-of-the-art methods available for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genome, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was shown to greatly improve the prediction of GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while keeping the false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis that compositional biases are present among the four major eukaryotic kingdoms, proposed in the literature, was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
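The Balanced SVM mentioned above is a custom modification of SVM training; as a rough, hypothetical analogue built with off-the-shelf tools, per-class weighting counteracts the tendency to over-predict the most represented class (synthetic data, illustrative only, not the BaCelLo implementation):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic, imbalanced "feature vectors" standing in for sequence-derived features.
X_major = rng.normal(loc=0.0, size=(900, 20))   # over-represented class
X_minor = rng.normal(loc=1.0, size=(100, 20))   # under-represented class
X = np.vstack([X_major, X_minor])
y = np.array([0] * 900 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" rescales the penalty inversely to class frequency,
# reducing the over-prediction of the most represented class.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))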
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large scale and high-detail applications. The two models were first applied to several numerical test cases, to assess the reliability and accuracy of different model versions. Then, the most effective versions were applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large scale analysis. Where complex flow processes occurred, local errors were observed, due to the model approximations. However, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
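For context, a common form of the diffusive (zero-inertia) shallow water approximation on which such models are typically based couples mass conservation with a Manning-type friction closure (the exact formulation and discretization adopted in CA2D and IFD-GGA are given in the thesis):

$$ \frac{\partial h}{\partial t} + \frac{\partial q_x}{\partial x} + \frac{\partial q_y}{\partial y} = p, \qquad \mathbf{q} = -\,\frac{h^{5/3}}{n}\,\frac{\nabla H}{\left|\nabla H\right|^{1/2}}, \qquad H = h + z, $$

where $h$ is the water depth, $H$ the free-surface elevation, $z$ the bottom elevation, $\mathbf{q}$ the unit-width discharge, $p$ a source term (e.g. rainfall or inflow) and $n$ the Manning roughness coefficient.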
Abstract:
Bioinformatics, in the last few decades, has played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem is to characterize its coding regions as thoroughly as possible. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As has recently been pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. In the present thesis we describe a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called “The Bologna Annotation Resource Plus” (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is possible by means of cluster-specific HMM profiles that can be used to compute reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+ based applications developed during my doctorate include the prediction of magnesium binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, as a web server for the functional and structural annotation of protein sequences, BAR+ is freely available at http://bar.biocomp.unibo.it/bar2.0.
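The non-hierarchical clustering step can be pictured as taking the connected components of the all-against-all alignment graph after keeping only the pairs that pass a stringent metric; the sketch below (illustrative thresholds and names, not the actual BAR+ criteria or code) shows the idea with a union-find structure:

from collections import defaultdict

def cluster_sequences(pairs, min_identity=0.4, min_coverage=0.9):
    """pairs: iterable of (seq_a, seq_b, identity, coverage) alignment hits."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b, ident, cov in pairs:
        if ident >= min_identity and cov >= min_coverage:
            union(a, b)          # stringent edge: same cluster
        else:
            find(a)              # still register both sequences as singletons
            find(b)

    clusters = defaultdict(set)
    for seq in parent:
        clusters[find(seq)].add(seq)
    return list(clusters.values())

# Toy usage: two sequences linked by a stringent hit plus one singleton.
hits = [("P1", "P2", 0.55, 0.95), ("P1", "P3", 0.25, 0.80)]
print(cluster_sequences(hits))  # e.g. [{'P1', 'P2'}, {'P3'}]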
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
Abstract:
The correlations between the evolution of Super Massive Black Holes (SMBHs) and their host galaxies suggest that SMBH accretion on sub-pc scales (active galactic nuclei, AGN) is linked to the building of the galaxy over kpc scales, through the so-called AGN feedback. Most of the galaxy assembly occurs in overdense large scale structures (LSSs). AGN residing in powerful sources in LSSs, such as proto-brightest cluster galaxies (BCGs), can affect the evolution of the surrounding intra-cluster medium (ICM) and nearby galaxies. Among distant AGN, high-redshift radio galaxies (HzRGs) are found to be excellent BCG progenitor candidates. In this Thesis we analyze novel interferometric observations of the so-called "J1030" field, centered around the z = 6.3 SDSS quasar J1030+0524, carried out with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Jansky Very Large Array (JVLA). This field hosts an LSS assembling around a powerful HzRG at z = 1.7 that shows evidence of positive AGN feedback, heating the surrounding ICM and promoting star formation in multiple galaxies at distances of hundreds of kpc. We report the detection of gas-rich members of the LSS, including the HzRG. We showed that the LSS is going to evolve into a local massive cluster and that the HzRG is the proto-BCG. We unveiled signatures of the proto-BCG's interaction with the surrounding ICM, strengthening the positive AGN feedback scenario. From the JVLA observations of the "J1030" field we extracted one of the deepest extragalactic radio surveys to date (~12.5 uJy at 5 sigma). Exploiting the synergy with the deep X-ray survey (~500 ks), we investigated the relation between the X-ray and radio emission of an X-ray-selected sample, unveiling that the radio emission is powered by different processes (star formation and AGN), and that the AGN-driven subsample is mostly composed of radio-quiet objects that display a significant X-ray/radio correlation.
Abstract:
In this thesis we present the development and current status of the IFrameNet project, aimed at the construction of a large-scale lexical semantic resource for the Italian language based on Frame Semantics theories. We will begin by situating our work in the wider context of Frame Semantics and of the FrameNet project, which, since 1997, has attempted to apply these theories to lexicography. We will then analyse and discuss the applicability of the structure of the American resource to Italian, focusing more specifically on the domain of fear, worry, and anxiety. We will finally propose some modifications aimed at improving this domain of the resource with regard to its coherence, its ability to accurately represent the linguistic reality, and in particular its applicability to Italian.
Abstract:
Transcription is controlled by promoter-selective transcription factors (TFs), which bind to cis-regulatory enhancer elements, termed hormone response elements (HREs), in a specific subset of genes. Regulation by these factors involves the recruitment of either coactivators or corepressors and direct interaction with the basal transcriptional machinery (1). Hormone-activated nuclear receptors (NRs) are well characterized transcription factors (2) that bind to the promoters of their target genes and recruit primary and secondary coactivator proteins which possess many enzymatic activities required for gene expression (1,3,4). In the present study, using single-cell high-resolution fluorescence microscopy and high-throughput microscopy (HTM) coupled to computational image analysis, we investigated transcriptional regulation controlled by the estrogen receptor alpha (ERalpha), in terms of large scale chromatin remodeling and interaction with the associated coactivator SRC-3 (Steroid Receptor Coactivator-3), a member of the p160 family of primary coactivators (28). ERalpha is a steroid-dependent transcription factor (16) that belongs to the NR superfamily (2,3) and, in response to the hormone 17β-estradiol (E2), regulates transcription of distinct target genes involved in development, puberty, and homeostasis (8,16). ERalpha spends most of its lifetime in the nucleus and undergoes a rapid (within minutes) intranuclear redistribution following the addition of either agonist or antagonist (17,18,19). We designed a HeLa cell line (PRL-HeLa), engineered with a chromosome-integrated reporter gene array (PRL-array) containing multicopy hormone response-binding elements for ERalpha that are derived from the physiological enhancer/promoter region of the prolactin gene. Following GFP-ER transfection of PRL-HeLa cells, we were able to observe in situ ligand-dependent (i) recruitment of the receptor and associated coregulators to the array, (ii) chromatin remodeling, and (iii) direct transcriptional readout of the reporter gene. Addition of E2 causes a visible opening (decondensation) of the PRL-array, colocalization of RNA Polymerase II, and transcriptional readout of the reporter gene, detected by mRNA FISH. On the contrary, when cells were treated with an ERalpha antagonist (Tamoxifen or ICI), a dramatic condensation of the PRL-array, displacement of RNA Polymerase II, and a complete loss of the transcriptional FISH signal were observed. All p160 family coactivators (28) colocalize with ERalpha at the PRL-array. Steroid Receptor Coactivator-3 (SRC-3/AIB1/ACTR/pCIP/RAC3/TRAM1) is a p160 family member and a known oncogenic protein (4,34). SRC-3 is regulated by a variety of post-translational modifications, including methylation, phosphorylation, acetylation, ubiquitination and sumoylation (4,35). These events have been shown to be important for its interaction with other coactivator proteins and NRs and for its oncogenic potential (37,39). A number of extracellular signaling molecules, such as steroid hormones, growth factors and cytokines, induce SRC-3 phosphorylation (40). These actions are mediated by a wide range of kinases, including extracellular signal-regulated kinases 1 and 2 (ERK1-2), c-Jun N-terminal kinase, p38 MAPK, and IkB kinases (IKKs) (41,42,43). Here, we report SRC-3 to be a nucleocytoplasmic shuttling protein whose cellular localization is regulated by phosphorylation and by interaction with ERalpha.
Using a combination of high-throughput and fluorescence microscopy, we show that both chemical inhibition (with U0126) and siRNA downregulation of the MAP/ERK1/2 kinase (MEK1/2) pathway induce a cytoplasmic shift in SRC-3 localization, whereas stimulation by EGF signaling enhances its nuclear localization by inducing phosphorylation at T24, S857, and S860, known participants in the regulation of SRC-3 activity (39). Accordingly, the cytoplasmic localization of a non-phosphorylatable SRC-3 mutant further supports these results. In the presence of ERalpha, U0126 also dramatically reduces hormone-dependent colocalization of ERalpha and SRC-3 in the nucleus, formation of the ER/SRC-3 coimmunoprecipitation complex in cell lysates, localization of SRC-3 at the ER-targeted prolactin promoter array (PRL-array), and transcriptional activity. Finally, we show that SRC-3 can also function as a cotransporter, facilitating the nuclear-cytoplasmic shuttling of the estrogen receptor. While a wealth of studies have revealed the molecular functions of NRs and coregulators, there is a paucity of data on how these functions are spatiotemporally organized in the cellular context. Technically and conceptually, our findings provide a new perspective on gene transcriptional control and on the mechanisms of action of gene regulators.
Abstract:
Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows, or tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples over large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete its implementation and use it to build two different types of test applications: a completely parallelizable one and a plasma simulation that is not completely parallelizable. Using this last application we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.
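As a minimal illustration of why indexing pays off when a tuple space holds many tuples, the sketch below implements the classic out/rd/in operations with a simple per-field index so that template matching does not scan the whole space (hypothetical names and structure, not the Globus-based service's actual API):

from collections import defaultdict

class TupleSpace:
    def __init__(self):
        self._tuples = set()
        # index[(position, value)] -> set of tuples having 'value' at 'position'
        self._index = defaultdict(set)

    def out(self, tup):
        """Insert a tuple."""
        self._tuples.add(tup)
        for pos, val in enumerate(tup):
            self._index[(pos, val)].add(tup)

    def _candidates(self, template):
        """Use the most selective bound field of the template, if any."""
        bound = [(pos, val) for pos, val in enumerate(template) if val is not None]
        if not bound:
            return set(self._tuples)
        return min((self._index[key] for key in bound), key=len)

    def rd(self, template):
        """Non-destructive read of one tuple matching the template (None = wildcard)."""
        for tup in self._candidates(template):
            if len(tup) == len(template) and all(
                t is None or t == v for t, v in zip(template, tup)
            ):
                return tup
        return None

    def in_(self, template):
        """Destructive read: remove and return a matching tuple."""
        tup = self.rd(template)
        if tup is not None:
            self._tuples.discard(tup)
            for pos, val in enumerate(tup):
                self._index[(pos, val)].discard(tup)
        return tup

# Toy usage
ts = TupleSpace()
ts.out(("temperature", "room1", 21.5))
ts.out(("temperature", "room2", 19.0))
print(ts.rd(("temperature", "room2", None)))  # ('temperature', 'room2', 19.0)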
Abstract:
Concerns over global change and its effect on coral reef survivorship have highlighted the need for long-term datasets and proxy records to interpret environmental trends and inform policymakers. Citizen science programs have proved to be a valid method for collecting data, reducing financial and time costs for institutions. This study is based on the elaboration of data collected by recreational divers, and its main purpose is to evaluate changes in the state of coral reef biodiversity in the Red Sea over a long-term period and to validate the volunteer-based monitoring method. Volunteer recreational divers completed a questionnaire after each dive, recording the presence of 72 animal taxa and negative reef conditions. Comparisons were made between records from volunteers and independent records from a marine biologist who performed the same dive at the same time. A total of 500 volunteers were tested in 78 validation trials. Relative values of accuracy, reliability and similarity appear comparable to those obtained by volunteer divers surveying predefined transects in other projects, or in community-based terrestrial monitoring. A total of 9,301 recreational divers participated in the monitoring program, completing 23,059 survey questionnaires over a 5-year period. The volunteer-sightings-based index showed significant differences between the geographical areas. The area of Hurghada is characterized by a medium-low biodiversity index, heavily damaged by uncontrolled anthropogenic exploitation. Coral reefs along the Ras Mohammed National Park at Sharm el Sheikh, conversely, showed a high biodiversity index. The detected pattern seems to be correlated with the conservation measures adopted. In our experience and that of other research institutes, citizen science can complement conventional methods and significantly reduce costs and time. By involving recreational divers we were able to build a large dataset covering a wide geographic area. The main limitation remains the difficulty of obtaining a homogeneous spatial sampling distribution.
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects which have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. The limitations of the standard approaches for large events emerge in this chapter. The difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation of the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first one is a threshold-based method which uses traditional seismic data. Then an innovative approach using continuous GPS data is explored. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
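For context, real-time magnitude estimates of this kind commonly rely on empirical scaling relations between an Early Warning parameter measured in the first seconds of the P wave, such as the low-pass-filtered peak displacement $P_d$, the hypocentral distance $R$ and the magnitude $M$, of the generic form

$$ \log P_d = A + B\,M + C\,\log R , $$

which is inverted for $M$ as longer signal windows and more stations become available; the coefficients, and the specific parameters and evolutionary strategy adopted in the thesis, may differ from this schematic form.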
Abstract:
Adhesion, immune evasion and invasion are key determinants of bacterial pathogenesis. Pathogenic bacteria possess a wide variety of surface-exposed and secreted proteins which allow them to adhere to tissues, escape the immune system and spread throughout the human body. Therefore, extensive contacts between the human and the bacterial extracellular proteomes take place at the host-pathogen interface at the protein level. Recent research has emphasized the importance of a global and deeper understanding of the molecular mechanisms which underlie bacterial immune evasion and pathogenesis. Through the use of a large-scale, unbiased, protein microarray-based approach and of large libraries of purified human and bacterial proteins, novel host-pathogen interactions were identified. This approach was first applied to Staphylococcus aureus, the cause of a wide variety of diseases ranging from skin infections to endocarditis and sepsis. The screening led to the identification of several novel interactions between the human and the S. aureus extracellular proteomes. The interaction between the S. aureus immune evasion protein FLIPr (formyl-peptide receptor like-1 inhibitory protein) and the human complement component C1q, key players in the offense-defense interplay, was characterized using label-free techniques and functional assays. The same approach was also applied to Neisseria meningitidis, a major cause of bacterial meningitis and fulminant sepsis worldwide. The screening led to the identification of several potential human receptors for the neisserial adhesin A (NadA), an important adhesion protein and a key determinant of meningococcal interactions with the human host at various stages. The interaction between NadA and human LOX-1 (low-density oxidized lipoprotein receptor) was confirmed using label-free technologies and cell binding experiments in vitro. Taken together, these two examples provided concrete insights into S. aureus and N. meningitidis pathogenesis, and identified protein microarrays coupled with appropriate validation methodologies as a powerful large-scale tool for host-pathogen interaction studies.
Abstract:
As a large and long-lived species with high economic value, restricted spawning areas and short spawning periods, the Atlantic bluefin tuna (BFT; Thunnus thynnus) is particularly susceptible to over-exploitation. Although BFT have been targeted by fisheries in the Mediterranean Sea for thousands of years, it is only in the last decades that the exploitation rate has risen far beyond sustainable levels. An understanding of the population structure, spatial dynamics, exploitation rates and the environmental variables that affect BFT is crucial for the conservation of the species. The aims of this PhD project were 1) to assess the accuracy of larval identification methods, 2) to determine the genetic structure of modern BFT populations, 3) to assess the self-recruitment rate in the Gulf of Mexico and Mediterranean spawning areas, 4) to estimate the immigration rate of BFT to feeding aggregations from the various spawning areas, and 5) to develop tools capable of investigating the temporal stability of population structuring in the Mediterranean Sea. Several weaknesses in modern morphology-based taxonomy, including the demographic decline of expert taxonomists, flawed identification keys, the reluctance of the taxonomic community to embrace advances in digital communications, and a general scarcity of modern user-friendly materials, are reviewed. Barcoding of scombrid larvae revealed important differences in the accuracy of the taxonomic identifications carried out by different ichthyoplanktologists following morphology-based methods. Using a Genotyping-by-Sequencing approach, a panel of 95 SNPs was developed and used to characterize the population structuring of BFT and the composition of adult feeding aggregations. Using novel molecular techniques, DNA was extracted from bluefin tuna vertebrae excavated from late Iron Age and ancient Roman settlements, Byzantine-era Constantinople, and a 20th-century collection. A second panel of 96 SNPs was developed to genotype historical and modern samples in order to elucidate changes in population structuring and in the allele frequencies of loci associated with selective traits.
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial improvement over the state of the art, thus representing a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. The framework has then been used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
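Schematically, the constraint-coupled structure and the primal decomposition split underlying these algorithms can be written as (generic form; the distributed, online, asynchronous and mixed-integer variants are the subject of the thesis):

$$ \min_{x_1,\ldots,x_N}\; \sum_{i=1}^{N} f_i(x_i) \quad \text{s.t.}\quad x_i \in X_i,\ \ i=1,\ldots,N, \qquad \sum_{i=1}^{N} g_i(x_i) \le b . $$

Primal decomposition assigns each agent an allocation $y_i$ of the shared resource, with $\sum_{i=1}^{N} y_i = b$, and lets it solve the local subproblem

$$ p_i(y_i) = \min_{x_i \in X_i,\; g_i(x_i) \le y_i} f_i(x_i), $$

while the allocations are iteratively updated (e.g. along subgradients of the $p_i$) so that $\sum_i p_i(y_i)$ decreases and the total allocation constraint is preserved.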