900 results for Hard combinatorial scheduling
Abstract:
X-ray photoemission spectroscopy (XPS) is one of the most universal and powerful tools for the investigation of chemical states and electronic structures of materials. The application of hard x-rays increases the inelastic mean free path of the emitted electrons within the solid and thus makes hard x-ray photoelectron spectroscopy (HAXPES) a bulk-sensitive probe for solid-state research and, in particular, a very effective nondestructive technique for studying buried layers.

This thesis focuses on the investigation of multilayer structures used in magnetic tunnel junctions (MTJs) by a number of techniques based on HAXPES. MTJs are the most important components of novel nanoscale devices employed in spintronics. HAXPES makes it feasible to investigate and deeply understand the mechanisms responsible for the high performance of such devices and the properties of the employed magnetic materials, which are in turn defined by their electronic structure. Thus, the process of B diffusion in CoFeB-based MTJs was investigated with respect to the annealing temperature and its influence on the changes in the electronic structure of the CoFeB electrodes, which clarify the behaviour and the huge TMR ratio values obtained in such devices. These results are presented in chapter 6. The changes in the valence states of buried off-stoichiometric Co2MnSi electrodes were investigated with respect to the Mn content α and its influence on the observed TMR ratio; these results are described in chapter 7.

Magnetoelectronic properties, such as the exchange splitting in ferromagnetic materials as well as the macroscopic magnetic ordering, can be studied by magnetic circular dichroism in photoemission (MCDAD). It is characterized by the appearance of an asymmetry in the photoemission spectra taken from a magnetized sample either upon reversal of the photon helicity or upon reversal of the magnetization direction of the sample while the photon helicity is fixed. Although it has recently been widely applied to the characterization of surfaces using low-energy photons, bulk properties have remained inaccessible. Therefore, in this work the method was combined with HAXPES to give access to the exploration of magnetic phenomena in the buried layers of complex multilayer structures. Chapter 8 contains the results of MCDAD measurements employing hard x-rays to explore the magnetic properties of MTJs based on common CoFe band ferromagnets as well as on the half-metallic ferromagnet Co2FeAl.

Inasmuch as the magnetoresistive characteristics of spintronic devices are fully defined by the electron spins of the ferromagnetic materials, their direct measurement has always attracted much attention but has to date been limited by the surface sensitivity of the available techniques. Chapter 9 presents the results of a successfully performed spin-resolved HAXPES experiment using a spin polarimeter of the SPLEED type on a buried Co2FeAl0.5Si0.5 magnetic layer. The measurements prove that a spin polarization of about 50% is retained during the transmission of the photoelectrons emitted from the Fe 2p3/2 state through a 3-nm-thick oxide capping layer.
Abstract:
Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are increasingly requested in order to find good solutions to complex strategic problems quickly. Resource optimization is nowadays a fundamental ground on which successful projects are built. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development cannot keep pace with that of modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (such as Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing instances of previously intractable dimensions to be addressed. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem, and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either solving the full problem (Guillotine Cuts and SSSPP) or enhancing a fundamental part of the solution method (VRP and ND). We support our claims with computational results for all problems, on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40×.
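The thesis's own Guillotine-Cuts, VRP, SSSPP, and Network-Design algorithms are not reproduced in this abstract. As a minimal sketch of the kind of parallelism Dynamic Programming exposes, the C/OpenMP fragment below parallelizes the 0/1 knapsack recurrence: every cell of row i depends only on row i-1, so a whole row can be filled concurrently. The item data are hypothetical and the code is illustrative only, not the thesis's method.

```c
/* Minimal sketch: row-parallel 0/1 knapsack DP with OpenMP.
 * Illustrative only; hypothetical data. Compile with -fopenmp. */
#include <stdio.h>
#include <omp.h>

#define N 4      /* number of items */
#define CAP 10   /* knapsack capacity */

int main(void) {
    int w[N] = {2, 3, 4, 5};   /* example weights */
    int v[N] = {3, 4, 5, 8};   /* example values  */
    int dp[2][CAP + 1] = {{0}};

    for (int i = 0; i < N; i++) {
        int cur = i & 1, prev = cur ^ 1;
        /* Cells of row i depend only on row i-1: safe to parallelize. */
        #pragma omp parallel for
        for (int c = 0; c <= CAP; c++) {
            int best = dp[prev][c];                 /* skip item i */
            if (c >= w[i] && dp[prev][c - w[i]] + v[i] > best)
                best = dp[prev][c - w[i]] + v[i];   /* take item i */
            dp[cur][c] = best;
        }
    }
    printf("optimal value: %d\n", dp[(N - 1) & 1][CAP]);
    return 0;
}
```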
Abstract:
This thesis makes two important contributions in the field of embedded many-core accelerators. We implemented an OpenMP runtime optimized for the tasking model on systems whose processors are tightly coupled into clusters and then interconnected through a network on chip. We focused on scalability and on the support of fine-grained tasks, as is typical in embedded applications. The second contribution of this thesis is a proposed extension of the OpenMP runtime that tries to anticipate the manifestation of errors caused by variability phenomena through efficient scheduling of the workload.
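The cluster-specific runtime itself is not shown in the abstract. The C fragment below is a minimal sketch of the standard OpenMP tasking model such a runtime optimizes: recursive fine-grained tasks with a granularity cutoff, the pattern whose per-task overheads the runtime must keep small. The cutoff value is an arbitrary assumption.

```c
/* Minimal sketch of OpenMP's tasking model (standard API, not the
 * thesis's cluster runtime). Compile with -fopenmp. */
#include <stdio.h>
#include <omp.h>

static long fib(int n) {
    long a, b;
    if (n < 2) return n;
    if (n < 20) return fib(n - 1) + fib(n - 2);  /* cutoff: keep tasks coarse enough */
    #pragma omp task shared(a)
    a = fib(n - 1);
    #pragma omp task shared(b)
    b = fib(n - 2);
    #pragma omp taskwait     /* join the two child tasks */
    return a + b;
}

int main(void) {
    long r;
    #pragma omp parallel
    #pragma omp single       /* one thread spawns the task tree */
    r = fib(30);
    printf("fib(30) = %ld\n", r);
    return 0;
}
```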
Abstract:
In the domain of safety-critical embedded systems, the design process for applications is highly complex. For a given hardware architecture, electronic control units can be upgraded so that all existing processes and signals execute on time. The timing requirements are strict and must be met in every periodic recurrence of the processes, since guaranteeing the parallel execution is of the utmost importance. Existing approaches can quickly compute design alternatives, but they do not guarantee that the cost of the necessary hardware changes is minimal. We present an approach that computes cost-minimal solutions to this problem that satisfy all timing constraints. Our algorithm uses linear programming with column generation, embedded in a tree structure, to provide lower and upper bounds during the optimization process. Through a decomposition of the master problem into independent subproblems, formulated as integer linear programs, the complex constraints guaranteeing periodic execution are shifted into the subproblems. Both the process-execution analyses and the signal-transmission methods are examined, and linearized representations are given. Furthermore, we present a new formulation for fixed-priority execution that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analyzing instances containing process structures from real applications. Our results show that lower bounds can be computed quickly to prove the optimality of heuristic solutions. When delivering optimal solutions with response times, our new formulation compares favourably with other approaches in terms of running time. The best results are obtained with a hybrid approach that combines heuristic start solutions, preprocessing, and a heuristic phase followed by a short exact computation phase.
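The column-generation formulation is not given in the abstract. As a hedged illustration of the timing side only, the C fragment below implements the classic fixed-priority worst-case response-time recurrence R_i = C_i + Σ_{j∈hp(i)} ⌈R_i/T_j⌉·C_j, i.e. the standard analysis, not the thesis's ILP formulation, on a hypothetical task set.

```c
/* Minimal sketch: classic fixed-priority worst-case response-time
 * iteration. Standard analysis on hypothetical tasks; not the
 * thesis's column-generation formulation. */
#include <stdio.h>

#define NTASKS 3

int main(void) {
    /* Tasks sorted by decreasing priority: WCET C, period T (= deadline). */
    long C[NTASKS] = {1, 2, 3};
    long T[NTASKS] = {4, 10, 20};

    for (int i = 0; i < NTASKS; i++) {
        long R = C[i], prev = -1;
        while (R != prev && R <= T[i]) {   /* iterate to a fixed point */
            prev = R;
            R = C[i];
            for (int j = 0; j < i; j++)    /* higher-priority interference */
                R += ((prev + T[j] - 1) / T[j]) * C[j];  /* ceil(prev/T_j)*C_j */
        }
        if (R <= T[i])
            printf("task %d: R = %ld (schedulable)\n", i, R);
        else
            printf("task %d: deadline miss (R > %ld)\n", i, T[i]);
    }
    return 0;
}
```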
Abstract:
The focus of this thesis is to contribute to the development of new, exact solution approaches to different combinatorial optimization problems. In particular, we derive dedicated algorithms for a special class of Traveling Tournament Problems (TTPs), the Dial-A-Ride Problem (DARP), and the Vehicle Routing Problem with Time Windows and Temporal Synchronized Pickup and Delivery (VRPTWTSPD). Furthermore, we extend the concept of using dual-optimal inequalities for stabilized Column Generation (CG) and detail its application to improved CG algorithms for the cutting stock problem, the bin packing problem, the vertex coloring problem, and the bin packing problem with conflicts. In all approaches, we make use of some knowledge about the structure of the problem at hand to individualize and enhance existing algorithms. Specifically, we utilize knowledge about the input data (TTP), problem-specific constraints (DARP and VRPTWTSPD), and the dual solution space (stabilized CG). Extensive computational results proving the usefulness of the proposed methods are reported.
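As a rough sketch of the mechanism that stabilized CG accelerates, the C fragment below implements only the pricing step of cutting-stock column generation: an unbounded knapsack over the current dual prices decides whether a new column (cutting pattern) has negative reduced cost. The master LP is omitted, the dual values are hypothetical, and the dual-optimal inequalities themselves are not shown.

```c
/* Minimal sketch of the cutting-stock pricing step in column
 * generation: given dual prices pi_i from the (omitted) master LP,
 * maximize sum(pi_i * a_i) s.t. sum(w_i * a_i) <= W; the column
 * enters if its reduced cost 1 - max < 0. Hypothetical data. */
#include <stdio.h>

#define NITEMS 3
#define W 100          /* roll width */

int main(void) {
    int    w[NITEMS]  = {45, 36, 31};       /* piece widths */
    double pi[NITEMS] = {0.5, 0.4, 0.35};   /* hypothetical dual prices */
    double best[W + 1] = {0.0};

    /* Unbounded knapsack DP over the roll width. */
    for (int c = 1; c <= W; c++)
        for (int i = 0; i < NITEMS; i++)
            if (w[i] <= c && best[c - w[i]] + pi[i] > best[c])
                best[c] = best[c - w[i]] + pi[i];

    double reduced = 1.0 - best[W];
    printf("best pattern value %.3f, reduced cost %.3f -> %s\n",
           best[W], reduced,
           reduced < -1e-9 ? "add column" : "optimal, stop");
    return 0;
}
```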
Abstract:
This thesis covers the study of the QNX system and the development of a simulator for hard/soft real-time tasks by means of a meta-scheduler. At the end of the development, the performance of the QNX Neutrino operating system is evaluated.
Abstract:
New "soft-hard" thermoplastic elastomers based on PBS for biomedical applications
Abstract:
The role of this thesis was to quantify, through the assignment of a weight, the urgency and the need for care of the patients processed by an optimization model. It is part of a project to build such a model for scheduling operations in the surgical ward of a generic hospital. The IBM OPL optimization suite was used.
Abstract:
In this thesis we present a strategy, and its implementation, for the problem of allocating and scheduling, on unary resources, periodic multi-task applications composed of activities that interact with one another and whose durations are uncertain. The goal is an allocation and scheduling strategy that guarantees robustness and efficiency in contexts where a-priori knowledge is limited and the applications repeat indefinitely over time. To this end, a hybrid static/dynamic approach is used. Statically, a solution of the problem is generated by constraint programming, with the activity durations arbitrarily fixed. This solution is not the solution of our problem; it is used to derive an ordering of the activities that make up the periodic applications. Dynamically, exploiting the ordering obtained, the actual allocation and scheduling of the periodic applications is carried out, considering variable durations for the activities. The efficiency obtained by applying our approach is evaluated through tests on a wide range of instances, both industrial and purposely generated synthetic ones. The results are compared with those obtained, for the same instances, by a purely static approach. As will be shown, in some cases the completion speed of the applications can even be quadrupled.
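The constraint model is not given in the abstract. The C fragment below sketches only the dynamic half of such a hybrid scheme, under simplifying assumptions: activities are dispatched on one unary resource in a precomputed static order, while their true durations (hypothetical values here) are revealed only at run time.

```c
/* Minimal sketch of the dynamic half of a hybrid static/dynamic
 * scheme: dispatch activities on a single unary resource in a
 * precomputed order; actual durations are revealed online.
 * The order and durations below are hypothetical. */
#include <stdio.h>

#define NACT 5

int main(void) {
    int order[NACT]    = {2, 0, 3, 1, 4};  /* from the offline CP solution */
    int duration[NACT] = {3, 1, 4, 2, 2};  /* revealed at run time */
    int t = 0;                             /* unary resource: one activity at a time */

    for (int k = 0; k < NACT; k++) {
        int a = order[k];
        printf("t=%2d: start activity %d (duration %d)\n", t, a, duration[a]);
        t += duration[a];                  /* activity holds the resource until done */
    }
    printf("makespan: %d\n", t);
    return 0;
}
```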
Abstract:
High Performance Computing is a technology used by computational clusters to build processing systems able to deliver far more powerful services than traditional computers. HPC technology has consequently become a decisive factor in industrial competition and in research. HPC systems keep growing in terms of nodes and cores, and forecasts indicate that the number of nodes will soon reach one million. This type of architecture also has very high costs in terms of resource consumption, which become unsustainable for the industrial market. A centralized scheduler cannot manage such a large number of resources while keeping a reasonable response time. This thesis presents a distributed scheduling model based on constraint programming, which models the scheduling problem through a set of temporal and resource constraints that must be satisfied. The scheduler tries to optimize the performance of the resources and tends to approach a desired consumption profile, considered optimal. Several different models are analyzed, and each of them is tested in various environments.
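The CP model itself is not given in the abstract. As a loose illustration of profile-driven scheduling, the C fragment below greedily places each job at the start time that keeps cumulative power usage closest, in squared deviation, to a desired consumption profile. All data are hypothetical, and the greedy rule is a stand-in for the thesis's constraint-based optimization.

```c
/* Minimal sketch of profile-driven scheduling (hypothetical data,
 * not the thesis's CP model): each job picks the start time that
 * keeps cumulative power usage closest to a desired profile. */
#include <stdio.h>

#define HORIZON 8
#define NJOBS   3

int main(void) {
    int dur[NJOBS]   = {3, 2, 4};   /* job durations */
    int power[NJOBS] = {2, 3, 1};   /* power drawn while running */
    int target[HORIZON] = {4, 4, 4, 4, 2, 2, 2, 2};  /* desired profile */
    int usage[HORIZON]  = {0};

    for (int j = 0; j < NJOBS; j++) {
        int bestStart = 0;
        long bestDev = -1;
        for (int s = 0; s + dur[j] <= HORIZON; s++) {
            long dev = 0;           /* squared deviation if job j starts at s */
            for (int t = 0; t < HORIZON; t++) {
                int u = usage[t] + ((t >= s && t < s + dur[j]) ? power[j] : 0);
                dev += (long)(u - target[t]) * (u - target[t]);
            }
            if (bestDev < 0 || dev < bestDev) { bestDev = dev; bestStart = s; }
        }
        for (int t = bestStart; t < bestStart + dur[j]; t++)
            usage[t] += power[j];   /* commit the chosen start time */
        printf("job %d -> start %d\n", j, bestStart);
    }
    printf("usage profile:");
    for (int t = 0; t < HORIZON; t++) printf(" %d", usage[t]);
    printf("\n");
    return 0;
}
```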
Abstract:
Objective: To compare the soft and hard tissue healing and remodeling around tissue-level implants with different neck configurations after at least 1 year of functional loading. Material and methods: Eighteen patients with multiple missing teeth in the posterior area received two implants inserted in the same sextant. One test (T) implant with a 1.8 mm turned neck and one control (C) implant with a 2.8 mm turned neck were randomly assigned. All implants were placed transmucosally to the same sink depth of approximately 1.8 mm. Peri-apical radiographs were obtained using the paralleling technique and digitized. Two investigators blinded to the implant type evaluated soft and hard tissue conditions at baseline, 6 months, and 1 year after loading. Results: The mean crestal bone levels and soft tissue parameters were not significantly different between T and C implants at any time point. However, T implants displayed significantly less crestal bone loss than C implants after 1 year. Moreover, a frequency analysis revealed a higher percentage (50%) of T implants with crestal bone levels 1–2 mm below the implant shoulder compared with C implants (5.6%) 1 year after loading. Conclusion: Implants with a reduced-height turned neck of 1.8 mm may indeed lower the crestal bone resorption and hence may maintain higher crestal bone levels than implants with a 2.8 mm turned neck, when sunk to the same depth. Moreover, several factors other than the vertical positioning of the moderately rough SLA surface may influence crestal bone levels after 1 year of function.