Abstract:
In this thesis we take the first steps towards the systematic application of a methodology for automatically building formal models of complex biological systems. Such a methodology could also be useful for designing artificial systems possessing desirable properties such as robustness and evolvability. The approach we follow in this thesis is to manipulate formal models by means of adaptive search methods called metaheuristics. In the first part of the thesis we develop state-of-the-art hybrid metaheuristic algorithms to tackle two important problems in genomics, namely Haplotype Inference by parsimony and the Founder Sequence Reconstruction Problem. We compare our algorithms with other effective techniques in the literature, show the strengths and limitations of our approaches to various problem formulations and, finally, propose further enhancements that could improve the performance of our algorithms and widen their applicability. In the second part, we concentrate on Boolean network (BN) models of gene regulatory networks (GRNs). We detail our automatic design methodology and apply it to four use cases which correspond to different design criteria and address some limitations of GRN modeling by BNs. Finally, we tackle the Density Classification Problem with the aim of showing the learning capabilities of BNs. Experimental evaluation of this methodology shows its efficacy in producing networks that meet our design criteria. Our results, consistent with what has been found in other works, also suggest that networks manipulated by a search process exhibit a mixture of characteristics typical of different dynamical regimes.
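To make the Boolean network formalism concrete, the following minimal Python sketch performs synchronous updates of a toy BN; the three-node network and its update functions are illustrative assumptions, not the models evolved in the thesis.

    # Minimal synchronous Boolean network update (illustrative toy example).
    # Each node's next state is a Boolean function of the current state vector.

    def step(state, functions):
        """Apply all node update functions simultaneously (synchronous update)."""
        return tuple(f(state) for f in functions)

    # Hypothetical 3-node network: node 0 copies node 2, node 1 is the AND of
    # nodes 0 and 2, node 2 negates node 1.
    functions = [
        lambda s: s[2],
        lambda s: s[0] and s[2],
        lambda s: not s[1],
    ]

    state = (True, False, True)
    for _ in range(4):
        state = step(state, functions)
        print(state)

Iterating this map from every initial state reveals the network's attractors, which is the kind of dynamical behavior that the design criteria mentioned above constrain.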
Abstract:
The Standard Model of particle physics, which describes three of the four fundamental interactions, has so far agreed very well with the measurements of the experiments at CERN, Fermilab, and other research facilities. However, not all questions of particle physics can be answered within this model. For example, the fourth fundamental force, gravitation, cannot be incorporated into the Standard Model. Moreover, the Standard Model offers no candidate for dark matter, which according to cosmological measurements makes up about 25% of our universe. Supersymmetry, which introduces a symmetry between fermions and bosons, is regarded as one of the most promising solutions to these open questions. This model gives rise to so-called supersymmetric particles, each of which has a Standard Model particle as its partner. If supersymmetry is realized in nature, one possible model of this symmetry is the R-parity-conserving mSUGRA model. In this model the lightest supersymmetric particle (LSP) is neutral and weakly interacting, so it cannot be detected directly in the detector; instead it must be detected indirectly via the energy carried away by the LSP, the missing transverse energy (etmiss).

In 2010 the ATLAS experiment will begin the search for new physics at the pp collider LHC, at a center-of-mass energy of sqrt(s) = 7-10 TeV and a luminosity of 10^32 cm^-2 s^-1. Because of the very high data rate, resulting from the roughly 10^8 readout channels of the ATLAS detector at a bunch-crossing rate of 40 MHz, a trigger system is required to reduce the amount of data to be stored. A compromise must be found between the available trigger rate and a very high trigger efficiency for the interesting events, since only about one in 10^8 events is interesting for the search for new physics. To meet the requirements on the trigger system, a three-level system is used in the experiment, with by far the largest data reduction taking place at the first trigger level.

Within this work, on the one hand, a substantial contribution has been made to the fundamental understanding of the properties of the missing transverse energy at the first trigger level. On the other hand, methods are presented with which the etmiss trigger efficiency for Standard Model processes and possible mSUGRA scenarios can be determined from data. In the optimization of the etmiss trigger thresholds for the first trigger level, the trigger rate at a luminosity of 10^33 cm^-2 s^-1 was fixed at 100 Hz. Various simulations, into which the author's own development work entered, were required for the trigger optimization. Using these simulations and the optimization algorithms developed here, it is shown that despite the low trigger rate the discovery potential (for a signal significance of at least 5 sigma) is increased by up to 66% with respect to the existing ATLAS first-level trigger menu by combining the etmiss threshold with lepton and jet trigger thresholds.
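As a back-of-the-envelope consistency check (this number follows from the figures above; it is not quoted in the thesis): reducing the 40 MHz bunch-crossing rate to a 100 Hz rate budget requires an overall rejection factor of

$\frac{40\,\mathrm{MHz}}{100\,\mathrm{Hz}} = \frac{4 \times 10^{7}\,\mathrm{Hz}}{10^{2}\,\mathrm{Hz}} = 4 \times 10^{5}$,

which the three-level trigger system distributes across its stages, with by far the largest share at the first level.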
Abstract:
This dissertation investigates the possibility of optimizing the use of energy on board a ship for the transport of chemical and petrochemical products. The software developed for this study can be adapted to any type of ship. The spreadsheet provides a methodology for estimating energy benefits and improvements, with an accuracy directly proportional to the data available on the configuration of the energy system and on the devices installed on board. The study is organized in several phases that simplify the work; the introduction lists the data required to carry out an accurate analysis and presents the adopted methodology. First, the plant layout, its characteristics and the main devices installed on board are explained. The main loads, mechanical, electrical and thermal, are then treated separately. Next, the main operating phases of the ship are selected: this approach was followed in order to better understand how the power demand on board the ship is distributed and exploited. A check on the sizing of the electrical system is then performed, which helps to establish whether the power estimated by the designers matches what is actually required on the ship. Mechanical, electrical and thermal load curves as a function of time are then obtained for all the operating phases considered: using Visual Basic for Applications (VBA), load profiles are created that can be handled in the subsequent optimization phase. The optimization is the heart of this study; the power profiles obtained in the previous phase are managed so as to obtain a system able to supply power to the ship in the best possible way from an energy point of view. The ship's energy system is modeled and optimized while keeping the status quo of the on-board devices, for which the "Load following", "two shifts" and "minimal" configurations are considered. A further investigation concerns the installation on board of a thermal energy storage system, so as to improve the exploitation of the available energy. Finally, in the conclusion, the actual consumption of the ship is compared with the results obtained with and without the introduction of the thermal storage system. With the "minimal" configuration it is possible to save about 1.49% of the total energy consumed during a year of operation; this saving is entirely free, since it can be achieved by following a few simple rules in the on-board energy management. The introduction of a thermal storage system increases the total saving up to 4.67% with a tank able to store 110,000 kWh of thermal energy; in this case, however, the installation cost of the tank must be borne. Economic and environmental aspects are finally discussed in order to explain and clarify the advantages that can be obtained by applying this study, in terms of money and of the reduction of emissions into the atmosphere.
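The load-profile step lends itself to a small illustration. The following Python sketch is a stand-in for the VBA workbook described above; the operating phases, durations and power figures are invented for illustration:

    # Toy construction of hourly load profiles per operating phase.
    # All numbers are illustrative, not taken from the thesis.

    phases = {            # phase -> (duration in hours, (mech, elec, thermal) kW)
        "navigation": (120, (8000.0, 900.0, 400.0)),
        "loading":    (24,  (200.0, 1500.0, 2500.0)),
        "unloading":  (24,  (200.0, 1800.0, 3000.0)),
    }

    profile = []          # list of (hour, mech_kW, elec_kW, thermal_kW)
    hour = 0
    for phase, (duration, (mech, elec, th)) in phases.items():
        for _ in range(duration):
            profile.append((hour, mech, elec, th))
            hour += 1

    total_kwh = sum(m + e + t for _, m, e, t in profile)
    print(f"{hour} h simulated, total demand {total_kwh:.0f} kWh")

Profiles of this shape are exactly what the optimization phase consumes when comparing the "Load following", "two shifts" and "minimal" configurations.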
Abstract:
The safety systems of nuclear power plants rely on low-voltage power, instrumentation and control cables. Inside the containment area, cables operate in harsh environments, characterized by relatively high temperature and gamma-irradiation. As these cables serve fundamental safety systems, they must be able to withstand unexpected accident conditions; their condition assessment is therefore of utmost importance as plants age and lifetime extensions are required. Nowadays, the integrity and functionality of these cables are monitored mainly through destructive tests, which require dedicated laboratory facilities. The investigation of electrical aging markers which can provide information about the state of the cable by non-destructive testing methods would significantly improve present diagnostic techniques. This work has been carried out within the framework of the ADVANCE (Aging Diagnostic and Prognostics of Low-Voltage I&C Cables) project, an FP7 European program. This Ph.D. thesis aims at studying the impact of aging on cable electrical parameters, in order to understand the evolution of the electrical properties associated with cable degradation. The identification of suitable aging markers requires comparing the variation of electrical properties with the physical/chemical degradation mechanisms of polymers for different insulating materials and compositions. The feasibility of non-destructive electrical condition monitoring techniques as potential substitutes for destructive methods is finally discussed by studying the correlation between electrical and mechanical properties. In this work, the electrical properties of cable insulators are monitored and characterized mainly by dielectric spectroscopy, polarization/depolarization current analysis and space charge distribution measurements. Among these techniques, dielectric spectroscopy showed the most promising results; by means of dielectric spectroscopy it is possible to identify the frequency range where the properties are most sensitive to aging. In particular, the imaginary part of the permittivity at high frequency, which is related to oxidation, has been identified as the most suitable aging marker based on electrical quantities.
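For reference, in standard dielectric notation (not specific to this thesis) the quantity measured by dielectric spectroscopy is the complex relative permittivity

$\varepsilon^{*}(\omega) = \varepsilon'(\omega) - j\,\varepsilon''(\omega)$,

and the imaginary (loss) part $\varepsilon''(\omega)$ evaluated at high frequency is the quantity identified above as the oxidation-related aging marker.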
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at detailed spatial resolution, which is computationally very intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the flow of chip/package thermal analysis, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of SCC temperature sensor readings and SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
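The per-bank refresh adaptation can be illustrated with a toy sketch. This minimal Python example assumes a common DRAM rule of thumb (retention roughly halves for every 10 °C above the nominal 85 °C operating point), not the calibrated model used in the thesis:

    # Toy temperature-aware per-bank DRAM refresh period selection.
    # Assumption: retention roughly halves every 10 degC above a nominal point,
    # so the refresh interval is halved accordingly (a common rule of thumb).

    BASE_PERIOD_MS = 64.0   # nominal refresh period at or below T_NOMINAL_C
    T_NOMINAL_C = 85.0

    def refresh_period_ms(bank_temp_c: float) -> float:
        """Refresh period for one bank given its local temperature."""
        if bank_temp_c <= T_NOMINAL_C:
            return BASE_PERIOD_MS
        excess = bank_temp_c - T_NOMINAL_C
        return BASE_PERIOD_MS / (2.0 ** (excess / 10.0))

    # Banks in a 3D stack see different temperatures (vertical variation):
    for temp in (70.0, 85.0, 95.0, 105.0):
        print(f"{temp:5.1f} degC -> refresh every {refresh_period_ms(temp):5.1f} ms")

Cool banks keep the long nominal period instead of the worst-case one, which is where the refresh power saving comes from.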
Abstract:
This work deals with the car sequencing (CS) problem, a combinatorial optimization problem for sequencing mixed-model assembly lines. The aim is to find a production sequence for different variants of a common base product such that work overload of the respective line operators is avoided or minimized. The variants are distinguished by certain options (e.g., sun roof yes/no) and, therefore, require different processing times at the stations of the line. CS introduces a so-called sequencing rule H:N for each option, which restricts the occurrence of this option to at most H in any N consecutive variants. It seeks a sequence that leads to no or a minimum number of sequencing rule violations. In this work, the suitability of CS for workload-oriented sequencing is analyzed. To this end, its solution quality is compared in experiments with that of the related mixed-model sequencing problem. A new sequencing rule generation approach as well as a new lower bound for the problem are presented. Different exact and heuristic solution methods for CS are developed and their efficiency is shown in experiments. Furthermore, CS is adjusted and applied to a resequencing problem with pull-off tables.
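A sequencing rule H:N can be checked with a sliding window. The following minimal Python sketch counts rule violations in a given sequence; the toy variants and the 1:2 rule are illustrative assumptions, and counting one violation per offending window is only one of several conventions used in the CS literature:

    # Count violations of a sequencing rule H:N for one option:
    # at most H occurrences of the option in any N consecutive variants.

    def rule_violations(sequence, has_option, h, n):
        """sequence: list of variant ids; has_option: variant id -> bool."""
        flags = [1 if has_option(v) else 0 for v in sequence]
        return sum(
            1
            for start in range(len(flags) - n + 1)
            if sum(flags[start:start + n]) > h
        )

    # Toy instance: variants 'A' and 'B' have the sun roof, 'C' does not; the
    # rule 1:2 allows at most one sun-roof car in any two consecutive positions.
    seq = ["A", "C", "B", "B", "C", "A"]
    print(rule_violations(seq, lambda v: v in {"A", "B"}, h=1, n=2))  # -> 1

The objective of CS is then to order the variants so that the total violation count over all options is zero or minimal.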
Abstract:
This thesis presents an analysis for the search for Supersymmetry with the ATLAS detector at the LHC. The final state with one lepton, several coloured particles and large missing transverse energy was chosen. Particular emphasis was placed on the optimization of the requirements for lepton identification. This optimization proved to be particularly useful when combined with multi-lepton selections. The systematic error associated with higher-order QCD diagrams in Monte Carlo production is given particular focus. Methods to verify and correct the energy measurement of hadronic showers are developed. Methods for the identification and removal of mismeasurements caused by the detector in the single-muon and four-jet environment are applied. A new detector simulation system is shown to provide good prospects for future fast Monte Carlo production. The analysis was performed for $35\,pb^{-1}$ and no significant deviation from the Standard Model is seen. Exclusion limits are set in this channel for minimal Supergravity, extending the previous limits set by the Tevatron and LEP.
Abstract:
In this thesis, we consider the problem of solving large and sparse linear systems of saddle point type stemming from optimization problems. The focus of the thesis is on iterative methods, and new preconditioning strategies are proposed, along with novel spectral estimates for the matrices involved.
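For context, in the standard notation of the saddle-point literature (not necessarily the exact systems treated in the thesis), such linear systems have the 2x2 block form

$\begin{pmatrix} A & B^{T} \\ B & -C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}$,

where $A$ stems from the Hessian of the objective, $B$ encodes the constraints, and $C$ (often zero) is a regularization block; preconditioners for Krylov methods are typically built from approximations of $A$ and of the Schur complement $S = C + B A^{-1} B^{T}$.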
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a substantial training set and notable computational effort. Methods for cross-domain text categorization have been proposed, which leverage a set of labeled documents from one domain to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tunable parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their respective representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvements, but models generated from one domain are shown to be effectively reusable in a different one.
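The nearest-centroid idea can be sketched briefly. This minimal Python illustration shows centroid classification with one adaptation pass toward the target domain; the toy vectors and the simple mean-update are placeholders for the thesis's actual term weighting and iteration scheme:

    # Minimal nearest-centroid text classification with one adaptation step.
    # Documents are bag-of-words vectors; real systems would use tf-idf weights.

    import numpy as np

    def nearest_centroid(doc, centroids):
        """Return the label of the most similar centroid (cosine similarity)."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return max(centroids, key=lambda label: cos(doc, centroids[label]))

    # Centroids learned on the source domain (toy 4-term vocabulary).
    centroids = {
        "sports": np.array([3.0, 1.0, 0.0, 0.0]),
        "tech":   np.array([0.0, 0.0, 2.0, 3.0]),
    }

    # Unlabeled target-domain documents.
    target_docs = [np.array([2.0, 2.0, 0.0, 1.0]), np.array([0.0, 1.0, 3.0, 2.0])]

    # Adaptation pass: classify the target documents, then move each centroid
    # toward the mean of the documents assigned to it.
    assigned = {label: [] for label in centroids}
    for doc in target_docs:
        assigned[nearest_centroid(doc, centroids)].append(doc)
    for label, docs in assigned.items():
        if docs:
            centroids[label] = 0.5 * centroids[label] + 0.5 * np.mean(docs, axis=0)

    print({label: c.round(2).tolist() for label, c in centroids.items()})

Repeating the adaptation pass lets the category profiles drift toward the vocabulary of the unknown domain, which is the mechanism behind the cross-domain transfer described above.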
Abstract:
This dissertation demonstrates and improves the predictive power of coupled-cluster theory with regard to the highly accurate calculation of molecular properties. The demonstration is carried out by means of extrapolation and additivity techniques in single-reference coupled-cluster theory, with whose help the existence and structure of previously unknown molecules containing heavy main-group elements are predicted. The example of cyclic SiS_2, a triatomic molecule with 16 valence electrons, makes it particularly clear that the predictive power of theory is nowadays on a par with experiment: theoretical considerations initiated an experimental search for this molecule, which finally led to its detection and characterization by rotational spectroscopy. The predictive power of coupled-cluster theory is improved by developing a multireference coupled-cluster method for the calculation of first-order spin-orbit splittings in 2^Pi states. The focus here lies on Mukherjee's variant of multireference coupled-cluster theory, but in principle the proposed computational scheme is applicable to all variants. The target accuracy is 10 cm^-1. It is reached with the new method when one- and two-electron effects and, for heavy elements, also scalar-relativistic effects are taken into account. In combination with coupled-cluster-based extrapolation and additivity schemes, the method is therefore well suited for computing highly accurate thermochemical data.
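As one concrete instance of the extrapolation techniques mentioned above (the widely used two-point correlation-energy formula, given for orientation rather than as the exact protocol of this work), correlation energies obtained with basis sets of cardinal number $X$ are extrapolated to the basis-set limit via

$E_{X}^{\mathrm{corr}} = E_{\infty}^{\mathrm{corr}} + A\,X^{-3}$,

so that two calculations (e.g. $X = 3, 4$) suffice to determine $E_{\infty}^{\mathrm{corr}}$ and $A$.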
Abstract:
The Standard Model of particle physics was developed to describe the fundamental particles, which form matter, and their interactions via the strong, electromagnetic and weak forces. Although most measurements are described with high accuracy, some observations indicate that the Standard Model is incomplete. Numerous extensions have been developed to address these limitations. Several of these extensions predict heavy resonances, so-called Z' bosons, that can decay into an electron-positron pair. The particle accelerator Large Hadron Collider (LHC) at CERN in Switzerland was built to collide protons at unprecedented center-of-mass energies, namely 7 TeV in 2011. With the data set recorded in 2011 by the ATLAS detector, a large multi-purpose detector located at the LHC, the electron-positron pair mass spectrum was measured up to high masses in the TeV range. The properties of electrons and the probability that other particles are mis-identified as electrons were studied in detail. Using the obtained information, a sophisticated Standard Model expectation was derived with data-driven methods and Monte Carlo simulations. In the comparison of the measurement with the expectation, no significant deviations from the Standard Model were observed. Therefore, exclusion limits for several Standard Model extensions were calculated. For example, Sequential Standard Model (SSM) Z' bosons with masses below 2.10 TeV were excluded at 95% Confidence Level (C.L.).
Abstract:
Logistics involves planning, managing, and organizing the flows of goods from the point of origin to the point of destination in order to meet given requirements. Logistics and transportation aspects are very important and represent a relevant cost for producing and shipping companies, but also for public administration and private citizens. The optimization of resources and the improvement in the organization of operations are crucial for all branches of logistics, from operations management to transportation. As this work shows, optimization techniques, models, and algorithms are important methods to solve the new and increasingly complex problems arising in different segments of logistics. Many operations management and transportation problems are related to the class of optimization problems called Vehicle Routing Problems (VRPs). In this work, we consider several real-world deterministic and stochastic problems included in the wide class of VRPs, and we solve them by means of exact and heuristic methods. We treat three classes of real-world routing and logistics problems. We deal with one of the most important tactical problems arising in the management of bike sharing systems, namely the Bike sharing Rebalancing Problem (BRP). We propose models and algorithms for real-world earthwork optimization problems. We describe the 3D printing (3DP) process and highlight several optimization issues in 3DP. Among these, we define the problem related to tool path definition in the 3DP process, the 3D Routing Problem (3DRP), which is a generalization of the arc routing problem. We present an ILP model and several heuristic algorithms to solve the 3DRP.
Abstract:
The goal of this thesis is the acceleration of numerical calculations of QCD observables, both at leading order and at next-to-leading order in the coupling constant. In particular, the optimization of helicity and spin summation in the context of VEGAS Monte Carlo algorithms is investigated. Two such methods are mentioned in the literature, but without detailed analyses; only one of them can be used at next-to-leading order. This work presents a total of five different methods that replace the helicity sums with a Monte Carlo integration. This integration can be combined with the existing phase-space integral, in the hope that this causes less overhead than the complete summation. For three of these methods, an extension to existing subtraction terms is developed, which is required to enable next-to-leading-order calculations. All methods are analyzed with respect to efficiency, accuracy, and ease of implementation before being compared with each other. In this comparison, one method shows clear advantages over all others.
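The core idea, replacing an exact helicity sum by an unbiased Monte Carlo estimate sampled together with the phase-space point, can be shown in a few lines. This is a self-contained Python toy with an invented integrand, not the thesis's QCD matrix elements:

    # Toy comparison: exact helicity sum vs. unbiased Monte Carlo estimate.
    # sum_h f(x, h) is replaced by N_h * f(x, h_random), with h sampled
    # alongside the phase-space point x in one combined integration.

    import itertools
    import random

    HELICITIES = list(itertools.product((-1, +1), repeat=4))  # 4 external legs

    def f(x, hel):
        """Invented stand-in for a squared helicity amplitude (positive)."""
        return (1.0 + 0.2 * sum(hel) * x) * x * x

    random.seed(1)
    n = 200_000
    exact_sum = 0.0   # phase-space MC with the exact helicity sum
    mc_both = 0.0     # single MC over phase space AND helicity
    for _ in range(n):
        x = random.random()                 # toy 1-d "phase space"
        exact_sum += sum(f(x, h) for h in HELICITIES)
        h = random.choice(HELICITIES)       # sample one helicity configuration
        mc_both += len(HELICITIES) * f(x, h)

    print(exact_sum / n, mc_both / n)       # both estimate the same integral

The sampled estimator evaluates the integrand once instead of 16 times per phase-space point, at the price of extra variance; the trade-off between these two effects is exactly what the efficiency analysis above quantifies.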
Abstract:
The focus of this thesis is to contribute to the development of new, exact solution approaches to different combinatorial optimization problems. In particular, we derive dedicated algorithms for a special class of Traveling Tournament Problems (TTPs), the Dial-A-Ride Problem (DARP), and the Vehicle Routing Problem with Time Windows and Temporal Synchronized Pickup and Delivery (VRPTWTSPD). Furthermore, we extend the concept of using dual-optimal inequalities for stabilized Column Generation (CG) and detail its application to improved CG algorithms for the cutting stock problem, the bin packing problem, the vertex coloring problem, and the bin packing problem with conflicts. In all approaches, we make use of some knowledge about the structure of the problem at hand to individualize and enhance existing algorithms. Specifically, we utilize knowledge about the input data (TTP), problem-specific constraints (DARP and VRPTWTSPD), and the dual solution space (stabilized CG). Extensive computational results proving the usefulness of the proposed methods are reported.
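For orientation, the cutting stock problem mentioned above is the textbook CG setting (the standard Gilmore-Gomory formulation, not notation taken from the thesis). The restricted master LP over a subset $P'$ of cutting patterns reads

$\min \sum_{p \in P'} x_{p} \quad \text{s.t.} \quad \sum_{p \in P'} a_{ip} x_{p} \ge d_{i} \;\; \forall i, \qquad x_{p} \ge 0$,

where $a_{ip}$ counts how often item $i$ appears in pattern $p$ and $d_{i}$ is its demand. Given duals $\pi_{i}$, the pricing problem seeks a knapsack-feasible pattern $(a_{i})$ with $\sum_{i} \ell_{i} a_{i} \le L$ maximizing $\sum_{i} \pi_{i} a_{i}$; a new column enters if this value exceeds 1. Dual-optimal inequalities stabilize this process by imposing constraints known to hold for some optimal duals, e.g. $\pi_{i} \ge \pi_{j}$ whenever item $i$ is at least as long as item $j$.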
Abstract:
The research activities focused on evaluating the effect of Mo addition on the mechanical properties and microstructure of the A354 aluminium casting alloy. Samples with increasing amounts of Mo were produced and heat treated. After heat treatment and exposure to high temperatures, the samples underwent microstructural and chemical analyses, hardness tests and tensile tests. The collected data led to the optimization of both the casting parameters, for obtaining a homogeneous Mo distribution in the alloy, and the heat treatment parameters, allowing the formation of Mo-based strengthening precipitates stable at high temperature. Microstructural and chemical analyses highlighted how Mo additions above 0.1 wt.% can modify the silicon eutectic morphology and hinder the formation of iron-based β intermetallics. High-temperature exposure curves, instead, showed that after long exposure hardness is only slightly influenced by heat treatment, while the effect of Mo additions above 0.3 wt.% is negligible. Tensile tests confirmed that the addition of 0.3 wt.% Mo induces an increase of about 10% in ultimate tensile strength after high-temperature exposure (250 °C for 100 h), while heat treatments have only a slight influence on mechanical behaviour. These results could be exploited for developing an innovative heat treatment sequence capable of reducing residual stresses in castings produced with Mo-modified A354.