Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is increasingly urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach has its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows any type of problem to be easily modeled and solved within a problem-independent framework, in contrast to local search and metaheuristic methods, which are highly problem specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
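The Sliced Neighborhood Search idea described above can be illustrated with a minimal sketch: a random slice of the incumbent's variables is released while the remaining variables stay fixed to their incumbent values, and a CP-based tree search re-explores the resulting subproblem. The sketch below is purely illustrative; the `solve_subproblem` routine, the `slice_fraction` parameter, and the widening rule are assumptions, not details taken from the thesis.

```python
import random

def sliced_neighborhood_search(incumbent, variables, solve_subproblem,
                               slice_fraction=0.3, max_iterations=50):
    """Illustrative SNS loop. incumbent: dict with keys "assignment" and
    "cost"; variables: list of variable names; solve_subproblem: assumed
    CP tree search over the freed variables under its own search limits."""
    best = incumbent
    for _ in range(max_iterations):
        # Pick the slice of variables to release (re-optimize) this iteration.
        freed = set(random.sample(variables, int(slice_fraction * len(variables))))
        fixed = {v: best["assignment"][v] for v in variables if v not in freed}
        candidate = solve_subproblem(fixed, freed)
        if candidate is not None and candidate["cost"] < best["cost"]:
            best = candidate  # intensification: restart slicing around the new incumbent
        else:
            # diversification: enlarge the slice when no improvement is found
            slice_fraction = min(1.0, slice_fraction * 1.5)
    return best
```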
Abstract:
Coupled-cluster theory provides one of the most successful concepts in electronic-structure theory. This work covers the parallelization of coupled-cluster energies, gradients, and second derivatives and its application to selected large-scale chemical problems, besides more practical aspects such as the publication and support of the quantum-chemistry package ACES II MAB and the design and development of a computational environment optimized for coupled-cluster calculations. The main objective of this thesis was to extend the range of applicability of coupled-cluster models to larger molecular systems and their properties and thereby to bring large-scale coupled-cluster calculations into the day-to-day routine of computational chemistry. A straightforward strategy for the parallelization of CCSD and CCSD(T) energies, gradients, and second derivatives has been outlined and implemented for closed-shell and open-shell references. Starting from the highly efficient serial implementation of the ACES II MAB computer code, an adaptation for affordable workstation clusters has been obtained by parallelizing the most time-consuming steps of the algorithms. Benchmark calculations for systems with up to 1300 basis functions and the presented applications show that the resulting algorithm for energies, gradients, and second derivatives at the CCSD and CCSD(T) level of theory exhibits good scaling with the number of processors and substantially extends the range of applicability. Within the framework of the 'High-accuracy Extrapolated Ab initio Thermochemistry' (HEAT) protocols, the effects of increased basis-set size and higher excitations in the coupled-cluster expansion were investigated. The HEAT scheme was generalized to molecules containing second-row atoms in the case of vinyl chloride, which allowed the different experimentally reported values to be discriminated. In the case of the benzene molecule it was shown that chemical accuracy can be achieved even for molecules of this size. Near-quantitative agreement with experiment (about 2 ppm deviation) for the prediction of fluorine-19 nuclear magnetic shielding constants can be achieved by employing the CCSD(T) model together with large basis sets at accurate equilibrium geometries, provided that vibrational averaging and temperature corrections via second-order vibrational perturbation theory are considered. Applying a very similar level of theory to the calculation of the carbon-13 NMR chemical shifts of benzene resulted in quantitative agreement with experimental gas-phase data. The NMR chemical-shift study of the bridgehead 1-adamantyl cation at the CCSD(T) level resolved earlier discrepancies of lower-level theoretical treatments. The equilibrium structure of diacetylene has been determined from the combination of experimental rotational constants of thirteen isotopic species with zero-point vibrational corrections calculated at various quantum-chemical levels; these empirical equilibrium structures agree to within 0.1 pm irrespective of the theoretical level employed. High-level quantum-chemical calculations on the hyperfine structure parameters of the cyanopolyynes were found to be in excellent agreement with experiment. Finally, the theoretically most accurate determination of the molecular equilibrium structure of ferrocene to date is presented.
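The work-distribution pattern behind the parallelization strategy mentioned above (splitting the most time-consuming steps over the nodes of a workstation cluster) can be sketched in a heavily simplified, generic form: the independent index combinations of an expensive loop, such as occupied-orbital triples in a (T)-type correction, are divided among MPI ranks and the partial energies are summed by a reduction. The sketch below uses mpi4py and a placeholder `triple_contribution` function; it is only an illustration of the pattern, not the ACES II MAB implementation.

```python
from itertools import combinations

from mpi4py import MPI

def parallel_triples_energy(n_occ, triple_contribution):
    """Distribute independent (i, j, k) occupied-orbital triples over MPI
    ranks in round-robin fashion and reduce the partial energies; the
    expensive per-triple work is hidden in triple_contribution (assumed)."""
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    local_energy = 0.0
    for idx, (i, j, k) in enumerate(combinations(range(n_occ), 3)):
        if idx % size == rank:              # static round-robin work sharing
            local_energy += triple_contribution(i, j, k)
    # Sum the partial contributions from all processors; result lands on rank 0.
    return comm.reduce(local_energy, op=MPI.SUM, root=0)
```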
Abstract:
Bioinformatics has, in the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the crucial problem becomes learning as much as possible about its coding regions. Protein sequence annotation is challenging and, owing to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. The present thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) within clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure, when a template is available. This is made possible by cluster-specific HMM profiles, which can be used to compute reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications were developed during my doctorate, including the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
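The clustering-and-transfer scheme described above can be illustrated by a minimal sketch: sequences are linked whenever a pairwise alignment passes a stringent threshold, the connected components of the resulting graph are taken as clusters, and terms that are well supported among a cluster's annotated members are transferred to its unannotated members. The threshold values, the `aligned_pairs` input, and the majority rule below are assumptions for illustration, not the actual BAR+ metric or its statistical validation.

```python
from collections import Counter, defaultdict

import networkx as nx

def cluster_and_transfer(aligned_pairs, annotations,
                         min_identity=40.0, min_coverage=90.0):
    """aligned_pairs: iterable of (seq_a, seq_b, identity %, coverage %);
    annotations: dict mapping a sequence id to a set of GO/Pfam terms.
    Returns the terms transferred to unannotated cluster members."""
    graph = nx.Graph()
    for a, b, identity, coverage in aligned_pairs:
        if identity >= min_identity and coverage >= min_coverage:
            graph.add_edge(a, b)          # stringent link between the two sequences
    transferred = defaultdict(set)
    for cluster in nx.connected_components(graph):
        # Count how often each term occurs among the annotated members.
        counts = Counter(t for seq in cluster for t in annotations.get(seq, ()))
        n_annotated = sum(1 for seq in cluster if annotations.get(seq))
        # Keep only terms supported by most annotated members
        # (a crude stand-in for a proper statistical validation).
        consensus = {t for t, c in counts.items()
                     if n_annotated and c / n_annotated >= 0.5}
        for seq in cluster:
            if not annotations.get(seq):
                transferred[seq] |= consensus
    return transferred
```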
Abstract:
Concerns over global change and its effects on coral reef survivorship have highlighted the need for long-term datasets and proxy records to interpret environmental trends and inform policymakers. Citizen science programs have proved to be a valid method for collecting data, reducing financial and time costs for institutions. This study is based on the analysis of data collected by recreational divers, and its main purpose is to evaluate changes in the state of coral reef biodiversity in the Red Sea over a long-term period and to validate the volunteer-based monitoring method. Volunteer recreational divers completed a questionnaire after each dive, recording the presence of 72 animal taxa and negative reef conditions. Comparisons were made between records from volunteers and independent records from a marine biologist who performed the same dive at the same time. A total of 500 volunteers were tested in 78 validation trials. The relative values of accuracy, reliability, and similarity appear comparable to those obtained by volunteer divers on precise transects in other projects, or in community-based terrestrial monitoring. In total, 9,301 recreational divers participated in the monitoring program, completing 23,059 survey questionnaires over a 5-year period. The volunteer-sightings-based index showed significant differences between the geographical areas. The area of Hurghada is characterized by a medium-low biodiversity index, having been heavily damaged by uncontrolled anthropogenic exploitation; coral reefs along the Ras Mohammed National Park at Sharm el Sheikh conversely showed a high biodiversity index. The detected pattern appears to be correlated with the conservation measures adopted. In our experience and that of other research institutes, citizen science can complement conventional methods and significantly reduce costs and time. By involving recreational divers we were able to build a large dataset covering a wide geographic area. The main limitation remains the difficulty of obtaining a homogeneous spatial sampling distribution.
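The validation trials mentioned above amount to comparing a volunteer's presence checklist against the marine biologist's reference record from the same dive. A minimal sketch of such a comparison is given below; the specific agreement measures (percent agreement and Jaccard similarity over the 72-taxa checklist) and the toy data are illustrative choices, not necessarily those used in the study.

```python
def checklist_agreement(volunteer_seen, expert_seen, all_taxa):
    """Compare two presence records over a fixed checklist of taxa.
    Returns overall percent agreement and Jaccard similarity of sightings."""
    volunteer_seen, expert_seen = set(volunteer_seen), set(expert_seen)
    agree = sum(1 for t in all_taxa
                if (t in volunteer_seen) == (t in expert_seen))
    accuracy = 100.0 * agree / len(all_taxa)
    union = volunteer_seen | expert_seen
    jaccard = len(volunteer_seen & expert_seen) / len(union) if union else 1.0
    return accuracy, jaccard

# Example with a toy 5-taxon checklist (illustrative values only).
taxa = ["napoleon wrasse", "grey moray", "clownfish", "lionfish", "turtle"]
print(checklist_agreement({"clownfish", "turtle"},
                          {"clownfish", "lionfish", "turtle"}, taxa))
```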
Abstract:
The thesis analyzes the relationships between agricultural development processes and the use of natural resources, particularly energy resources, at the international (developing and developed countries), national (Italy), regional (Emilia Romagna), and farm level, with the aim of assessing the eco-efficiency of agricultural development processes, its evolution over time, and its main dynamics, also in relation to the problems of dependence on fossil resources, food security, and the substitution between agricultural land devoted to human food and to animal feed. For the two macroeconomic case studies the methodology known as SUMMA, SUstainability Multi-method, multi-scale Assessment (Ulgiati et al., 2006), was adopted; it integrates a set of impact categories from life cycle assessment (LCA), cost-benefit evaluations, and the global analytical perspective of emergy accounting. The large-scale analysis was further enriched by a local-scale case study of a farm producing milk and renewable electricity (photovoltaics and biogas). This study, carried out by means of LCA and contingent valuation, assessed the environmental, economic, and social effects of scenarios for reducing dependence on fossil sources. The macroeconomic case studies show that, despite policies supporting increased efficiency and "green" forms of production, agriculture at the global level continues to evolve with increasing dependence on fossil energy sources. The first effects of the European Community agricultural policies towards greater sustainability nevertheless seem to be emerging for the European countries. Overall, the energy footprint remains high, since the continuing mechanization of agricultural processes must necessarily draw on energy sources that substitute for human labour. Agricultural land is shrinking in the European countries analyzed and in Italy, increasing the risk of food insecurity, since the national population is instead growing.
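A simple way to track the eco-efficiency trend discussed above is to relate agricultural output to the fossil energy embodied in its inputs, year by year. The sketch below computes such a ratio from a hypothetical national time series; the variable names, units, and numbers are placeholders for illustration, not figures from the thesis or from the SUMMA accounting.

```python
def eco_efficiency_trend(output_value, fossil_energy_input):
    """Eco-efficiency as output per unit of fossil energy input
    (e.g. million EUR per TJ), computed for each year of a series."""
    return {year: output_value[year] / fossil_energy_input[year]
            for year in output_value if year in fossil_energy_input}

# Hypothetical series: output in million EUR, fossil inputs in TJ (placeholders).
output = {2000: 48_000, 2005: 50_500, 2010: 51_200}
fossil = {2000: 610_000, 2005: 590_000, 2010: 565_000}
print(eco_efficiency_trend(output, fossil))  # rising ratio = improving eco-efficiency
```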
Abstract:
This thesis proposes an integrated, holistic approach to the study of neuromuscular fatigue, in order to encompass all the causes and all the consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motoneuron and motor unit level and, from there, to the more general biomechanical alterations. Chapter 1 reports the various definitions of fatigue proposed across several contexts. Chapter 2 examines in detail the electrophysiological changes in terms of motor unit behavior and descending neural drive to the muscle, as well as the biomechanical adaptations they induce. Chapter 3 reports a study based on the observation of temporal features extracted from surface EMG (sEMG) signals, which highlights the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study of sEMG-based indicators also opened a line of inquiry into the neurophysiological mechanisms underlying fatigue. For this purpose, Chapter 5 presents a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions. In particular, two methodologies were applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition and a coherence analysis on motor unit spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. In particular, the last chapter of this thesis can be considered a paradigm: physiological, metabolic, environmental, psychological, and biomechanical factors influence the performance of a cyclist, and only when all of these are considered together in a novel, integrative way is it possible to derive a clear model and make correct assessments.
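For context on the sEMG-based indicators discussed above, the classical approach tracks signal amplitude and spectral content over successive epochs of a fatiguing contraction: root-mean-square amplitude tends to rise while the median frequency of the power spectrum shifts downward. The sketch below computes these two standard indicators; it illustrates the conventional one-dimensional features that motivate the search for a more robust indicator, not the novel bi-dimensional parameter proposed in the thesis.

```python
import numpy as np

def fatigue_indicators(emg, fs, epoch_s=1.0):
    """Per-epoch RMS amplitude and median frequency of an sEMG channel.
    emg: 1-D signal (volts), fs: sampling rate in Hz (e.g. 2048)."""
    n = int(epoch_s * fs)
    results = []
    for start in range(0, len(emg) - n + 1, n):
        seg = emg[start:start + n] - np.mean(emg[start:start + n])
        rms = np.sqrt(np.mean(seg ** 2))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        power = np.abs(np.fft.rfft(seg)) ** 2
        # Median frequency: splits the power spectrum into two equal-energy halves.
        cumulative = np.cumsum(power)
        mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
        results.append((rms, mdf))
    return results  # rising RMS with falling MDF is the classical fatigue signature
```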
Abstract:
The wide diffusion of cheap, small, portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly analyzed in a timely manner, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of crosscutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, which offers a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
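The idea of trading replication cost against per-flow quality requirements, as explored with the LAAR technique above, can be sketched in a generic form: each operator in the stream-processing graph carries an application-declared tolerance to data loss, and only operators whose tolerance is stricter than a threshold are actively replicated, while the others fall back to cheaper passive recovery. The class, field names, and threshold below are illustrative assumptions, not the Quasit or LAAR APIs.

```python
from dataclasses import dataclass

@dataclass
class OperatorQoS:
    name: str
    max_loss_fraction: float   # tolerated fraction of lost tuples (declared by the app)
    cpu_cost: float            # relative cost of running one replica

def plan_replication(operators, loss_threshold=0.01):
    """Assign active replication only to operators whose declared loss
    tolerance is stricter than loss_threshold; others use cheaper passive
    recovery. Returns the plan and its total relative CPU cost."""
    plan, total_cost = {}, 0.0
    for op in operators:
        replicate = op.max_loss_fraction < loss_threshold
        plan[op.name] = "active-replica" if replicate else "passive-recovery"
        total_cost += op.cpu_cost * (2.0 if replicate else 1.0)
    return plan, total_cost

# Example: a health-care alert flow tolerates almost no loss, an entertainment feed does.
ops = [OperatorQoS("ecg-alert", 0.001, 1.0), OperatorQoS("music-recs", 0.10, 1.0)]
print(plan_replication(ops))
```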
Abstract:
Statically balanced compliant mechanisms require no holding force throughout their range of motion while maintaining the advantages of compliant mechanisms. In this paper, a postbuckled fixed-guided beam is proposed to provide the negative stiffness to balance the positive stiffness of a compliant mechanism. To that end, a curve decomposition modeling method is presented to simplify the large deflection analysis. The modeling method facilitates parametric design insight and elucidates key points on the force-deflection curve. Experimental results validate the analysis. Furthermore, static balancing with fixed-guided beams is demonstrated for a rectilinear proof-of-concept prototype.
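The static-balancing principle summarized above is that the negative stiffness contributed by the postbuckled fixed-guided beam cancels the positive stiffness of the compliant mechanism, so the net holding force stays near zero across the travel. The sketch below illustrates this cancellation with a linearized model around the operating point; the stiffness values and the linear treatment of the buckled beam are assumptions for illustration, not results from the paper.

```python
import numpy as np

def net_holding_force(x, k_mech, k_beam, preload=0.0):
    """Linearized model near the operating point: the compliant mechanism
    contributes +k_mech*x, the postbuckled fixed-guided beam contributes
    preload - k_beam*x (negative stiffness). Balance requires k_beam ~ k_mech."""
    return k_mech * x + (preload - k_beam * x)

x = np.linspace(-5e-3, 5e-3, 5)                            # travel in metres (illustrative)
print(net_holding_force(x, k_mech=200.0, k_beam=200.0))    # ~ constant force: balanced
print(net_holding_force(x, k_mech=200.0, k_beam=150.0))    # residual stiffness: unbalanced
```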