944 results for State-space modeling
Abstract:
Kinematics is a fundamental tool to infer the dynamical structure of galaxies and to understand their formation and evolution. Spectroscopic observations of gas emission lines are often used to derive rotation curves and velocity dispersions. It is, however, difficult to disentangle these two quantities in low-spatial-resolution data because of beam smearing. In this thesis, we present 3D-Barolo, a new software package to derive the gas kinematics of disk galaxies from emission-line data-cubes. The code builds tilted-ring models in the 3D observational space and compares them with the actual data-cubes. 3D-Barolo works with data at a wide range of spatial resolutions without being affected by instrumental biases. We use 3D-Barolo to derive rotation curves and velocity dispersions of several galaxies in both the local and the high-redshift Universe. We run our code on HI observations of nearby galaxies and compare our results with traditional 2D approaches. We show that a 3D approach to the derivation of the gas kinematics is to be preferred to a 2D approach whenever a galaxy is resolved with fewer than about 20 elements across the disk. We moreover analyze a sample of galaxies at z~1, observed in the H-alpha line with the KMOS/VLT spectrograph. Our 3D modeling reveals that the kinematics of these high-z systems is comparable to that of local disk galaxies, with steeply-rising rotation curves followed by a flat part and H-alpha velocity dispersions of 15-40 km/s over the whole disks. This evidence suggests that disk galaxies were already fully settled about 7-8 billion years ago. In summary, 3D-Barolo is a powerful and robust tool to separate physical and instrumental effects and to derive reliable kinematics. The analysis of large samples of galaxies at different redshifts with 3D-Barolo will provide new insights into how galaxies assemble and evolve throughout cosmic time.
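As an illustration of the tilted-ring idea, the line-of-sight velocity that such a model assigns to each sky position can be sketched as follows. This is a minimal sketch with illustrative names, not 3D-Barolo's actual API; a real 3D model also convolves the model cube with the instrumental beam, which is exactly what makes it robust against beam smearing.

```python
import math

def los_velocity(x, y, vrot, vsys, inc_deg, pa_deg):
    """Line-of-sight velocity assigned by one tilted ring to sky position (x, y).

    vrot: circular velocity of the ring (km/s), vsys: systemic velocity (km/s),
    inc_deg: inclination (0 = face-on), pa_deg: position angle of the major axis.
    """
    inc = math.radians(inc_deg)
    pa = math.radians(pa_deg)
    # Rotate sky coordinates into the ring frame and deproject the minor axis.
    xr = -x * math.sin(pa) + y * math.cos(pa)
    yr = -(x * math.cos(pa) + y * math.sin(pa)) / math.cos(inc)
    r = math.hypot(xr, yr)
    if r == 0.0:
        return vsys
    cos_theta = xr / r  # azimuthal angle in the plane of the ring
    return vsys + vrot * cos_theta * math.sin(inc)

# Point on the projected major axis of a ring inclined by 60 degrees:
v = los_velocity(x=0.0, y=5.0, vrot=200.0, vsys=1000.0, inc_deg=60.0, pa_deg=0.0)
```

On the major axis the full projected rotation signal vrot·sin(i) is recovered; towards the minor axis the contribution falls off as cos θ, which is why low-resolution 2D velocity fields mix rotation into apparent dispersion.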
Abstract:
In the oil and gas industry, pore-scale imaging and simulation are on the verge of becoming routine applications. Their further potential can be exploited in the environmental field, e.g. for the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) is a non-destructive 3D imaging technique that is frequently used to investigate the internal structure of geological samples. The first aim of this dissertation was the implementation of an image-processing technique that removes the beam-hardening artifact of X-ray computed tomography and simplifies the segmentation of its data. The second aim of this work was to investigate the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modeling in pore spaces using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the center, and the different phases could be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least-squares support vector machine (LSSVM) method is characterized by a modular design and is very well suited for pattern recognition and classification. For this reason, the LSSVM method was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but in that case requires longer computation times, since multi-dimensional training data sets have to be used.
The dynamics of the immiscible phases air and water is investigated by a combination of the pore-morphology method and the lattice Boltzmann method for drainage and imbibition processes in 3D data sets of soils obtained by synchrotron-based XCT. Although the pore-morphology method is a simple approach of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused by the predominantly connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water-saturation level and is compared with macroscopic calculations from empirical models; the data agree well, especially at high water saturations. In order to predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry, and fluid flow velocity was studied in a soil aggregate, using the microorganism Escherichia coli as an example. The asymmetric and long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and by the heterogeneity of the flow field. It was observed that the biocolloid residence time is a function of the pressure gradient as well as of the colloid size. Our modeling results agree very well with previously published data.
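The pore-morphology idea of fitting spheres into the pore space can be sketched as follows. This is a 2D toy sketch under the Young-Laplace relation r = 2σ·cos(θ)/p_c; the function name and the distance-based drainage criterion are simplifications of morphological opening, not the actual code used in the thesis.

```python
import math

def drained_fraction(pore_mask, sigma, theta_deg, pc):
    """Fraction of the pore space drained at capillary pressure pc.

    Young-Laplace gives the radius of the largest invading sphere,
    r = 2*sigma*cos(theta)/pc. As a crude stand-in for morphological
    opening, a pore voxel is considered drained when its distance to the
    nearest grain voxel is at least r.
    pore_mask: 2D list of bools, True = pore, False = grain (toy brute force).
    """
    r = 2.0 * sigma * math.cos(math.radians(theta_deg)) / pc
    ny, nx = len(pore_mask), len(pore_mask[0])
    grains = [(i, j) for i in range(ny) for j in range(nx) if not pore_mask[i][j]]
    pores = [(i, j) for i in range(ny) for j in range(nx) if pore_mask[i][j]]
    drained = sum(1 for (i, j) in pores
                  if min(math.hypot(i - gi, j - gj) for (gi, gj) in grains) >= r)
    return drained / len(pores)

# 3x3 pore body surrounded by grains: only the centre voxel lies 2 voxels
# from the nearest grain and drains at an invasion radius of 1.5.
pore = [[False] * 5,
        [False, True, True, True, False],
        [False, True, True, True, False],
        [False, True, True, True, False],
        [False] * 5]
frac = drained_fraction(pore, sigma=0.75, theta_deg=0.0, pc=1.0)  # r = 1.5
```

Repeating this over a range of pc values yields a capillary pressure-saturation curve; hysteresis arises because imbibition refills the pore space by a different (connectivity-dependent) criterion than drainage.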
Abstract:
The dissertation entitled "Tuning of magnetic exchange interactions between organic radicals through bond and space" comprises eight chapters. The initial part of chapter 1 gives an overview of organic radicals and their applications; the latter part describes the motivation and objectives of the thesis. As EPR spectroscopy is an essential tool for studying organic radicals, the basic principles of EPR spectroscopy are discussed in chapter 2. Antiferromagnetically coupled species can be considered as a source of interacting bosons. Consequently, such biradicals can serve as molecular models of a gas of magnetic excitations, which can be used for quantum computing or quantum information processing. Notably, an initially small triplet-state population in weakly AF-coupled biradicals can be switched into a larger one in the presence of an applied magnetic field. Such biradical systems are promising molecular models for studying the phenomenon of magnetic-field-induced Bose-Einstein condensation in the solid state. To observe such phenomena, it is very important to control the intra- as well as inter-molecular magnetic exchange interactions. Chapters 3 to 5 deal with the tuning of intra- and inter-molecular exchange interactions utilizing different approaches, including changing the length of the π-spacer, introduction of functional groups, metal-complex formation with a diamagnetic metal ion, and variation of the radical moieties. During this study I came across two very interesting molecules, 2,7-TMPNO and BPNO, which exist in a semi-quinoid form and exhibit characteristics of the biradical and quinoid forms simultaneously. 2,7-TMPNO possesses a singlet-triplet energy gap of ΔEST = –1185 K, so it is nearly impossible to observe magnetic-field-induced spin switching. We therefore studied the spin switching of this molecule by photo-excitation, which is discussed in chapter 6.
The structural similarity of BPNO with Tschitschibabin's hydrocarbon allowed us to examine the discrepancies related to the ground state of Tschitschibabin's hydrocarbon (discussed in chapter 7). Finally, chapter 8 discusses the synthesis and characterization of a neutral paramagnetic HBC derivative (HBCNO). The magnetic and liquid-crystalline properties of HBCNO were studied by DSC and EPR spectroscopy.
Abstract:
This thesis deals with the investigation of charge generation and recombination processes in three different polymer:fullerene photovoltaic blends by means of ultrafast time-resolved optical spectroscopy. The first donor polymer, namely poly[N-11''-henicosanyl-2,7-carbazole-alt-5,5-(4',7'-di-2-thienyl-2',1',3'-benzothiadiazole)] (PCDTBT), is a mid-bandgap polymer; the other two materials are the low-bandgap donor polymers poly[2,6-(4,4-bis-(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b']-dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)] (PCPDTBT) and poly[(4,4'-bis(2-ethylhexyl)dithieno[3,2-b:2',3'-d]silole)-2,6-diyl-alt-(2,1,3-benzothiadiazole)-4,7-diyl] (PSBTBT). Despite their broader absorption, the low-bandgap polymers do not show enhanced photovoltaic efficiencies compared to the mid-bandgap system.

Transient absorption spectroscopy revealed that energetic disorder plays an important role in the photophysics of PCDTBT, and that in a blend with PCBM geminate losses are small. The photophysics of the low-bandgap system PCPDTBT was strongly altered by adding a high-boiling-point cosolvent to the polymer:fullerene blend, owing to a partial demixing of the materials. We observed an increase in device performance together with a reduction of geminate recombination upon addition of the cosolvent. By applying model-free multivariate curve resolution to the spectroscopic data, we found that fast non-geminate recombination due to polymer triplet-state formation is a limiting loss channel in the low-bandgap material system PCPDTBT, whereas in PSBTBT triplet formation has a smaller impact on device performance, and thus higher efficiencies are obtained.
Abstract:
In this thesis, functional renormalization group techniques were applied to the study of the quantum field theory of a scalar field with O(N) symmetry, both in flat (Euclidean) spacetime and in the case of coupling to a gravitational field within the asymptotic-safety paradigm. The first chapter briefly presents some basic concepts of field theory in a Euclidean space of arbitrary dimension. The second chapter extensively discusses the functional renormalization method devised by Wetterich and provides a first simple example of its application, the scalar model. In the third chapter, the O(N) model in flat spacetime was studied in detail, deriving analytically the evolution equations of the relevant quantities of the model; the analysis then specializes to the infinite-N case. In the fourth chapter, the analysis of the fixed-point equations in the infinite-N limit is begun, starting from the case of vanishing anomalous dimension and constant wave-function renormalization (the LPA approximation), already studied in the literature; the NLO case in the derivative expansion is then considered. In the fifth chapter, the non-minimal coupling to a gravitational field is introduced, whose quantum nature is treated at the QFT level according to the asymptotic-safety renormalizability paradigm. For this model, the fixed-point equations for the main observables were derived and their behavior was studied for different values of N.
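For reference, the exact flow equation devised by Wetterich, on which the whole analysis rests, is

```latex
\partial_t \Gamma_k[\phi]
  = \tfrac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}\partial_t R_k\right],
\qquad t = \ln(k/\Lambda),
```

where \Gamma_k is the effective average action, \Gamma_k^{(2)} its second functional derivative, and R_k the infrared regulator that suppresses modes with momenta below the scale k.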
Abstract:
A well-known European research centre recently modified a conventional CS-25-class jet into a scientific platform. During the certification process of the modifications, their impact on performance was studied exhaustively. For the study of the flying qualities, the test pilots developed an ad hoc certification procedure consisting of separate qualitative tests of longitudinal, lateral, and directional stability. The goal of this thesis is to analyze the flight data recorded during the test flights in order to extract quantitative information about the longitudinal stability of the modified aircraft. First, three different modifications made to the aircraft were analyzed, and the results were then compared in order to understand their influence on the flying qualities of the aircraft. The aerodynamic derivatives were estimated using so-called parameter identification, which aims to replicate the variables recorded during the flight tests by varying a given set of coefficients within the linearized model of the aircraft dynamics. The identification of the short-period mode allowed the extraction of its characteristic parameters, such as the damping ratio and the natural frequency. The procedure also allowed the computation of the so-called Control Anticipation Parameter (CAP), a parameter that characterizes the flying qualities of an airplane. The results obtained were compared with the requirements prescribed by the MIL-STD-1797-A standard and were found to comply with the highest level of flying qualities.
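The extraction of the short-period parameters and of the CAP from an identified second-order model can be sketched as follows. The numbers are purely illustrative; the actual identified coefficients of the aircraft are not reproduced here.

```python
import math

def short_period_parameters(a1, a0):
    """Natural frequency and damping ratio of the identified short-period mode.

    Matching the identified characteristic polynomial s^2 + a1*s + a0 to the
    canonical second-order form s^2 + 2*zeta*omega_n*s + omega_n^2.
    """
    omega_n = math.sqrt(a0)      # natural frequency (rad/s)
    zeta = a1 / (2.0 * omega_n)  # damping ratio
    return omega_n, zeta

def control_anticipation_parameter(omega_n, n_alpha):
    """CAP = omega_n^2 / n_alpha, where n_alpha is the load factor per unit
    angle of attack (g/rad), as defined in MIL-STD-1797A."""
    return omega_n ** 2 / n_alpha

# Illustrative identified coefficients and load-factor sensitivity:
omega_n, zeta = short_period_parameters(a1=2.4, a0=9.0)
cap = control_anticipation_parameter(omega_n, n_alpha=18.0)
```

MIL-STD-1797A then grades the (zeta, CAP) pair against flight-phase-dependent bounds to assign a flying-qualities level.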
Abstract:
The work presented in this thesis concerned the study and simulation of bistatic radar experiments for planetary exploration missions. In particular, the work focused on the use and improvement of a software simulator previously developed by a consortium of companies and research institutions within a study of the European Space Agency (ESA), funded in 2008 and carried out between 2009 and 2010. The Spanish company GMV coordinated the study, in which research groups from the University of Rome "Sapienza" and the University of Bologna also took part. The work concentrated on determining the cause of some inconsistencies in the outputs of the part of the simulator, implemented in MATLAB, devoted to estimating the characteristics of the surface of Titan, in particular its dielectric constant and mean surface roughness, by means of a bistatic radar experiment in downlink mode performed by the Cassini-Huygens probe in orbit around Titan. Bistatic radar experiments for the study of celestial bodies have been part of the history of space exploration since the 1960s, although the equipment used, and the mission phases during which these experiments were carried out, were never specifically designed for the purpose. Hence the need for a simulator to study various possible modes of bistatic radar experiments in different types of mission. In a first phase of approaching the simulator, the work focused on studying the documentation accompanying the code, so as to gain a general idea of its structure and operation. A phase of detailed study followed, determining the purpose of every line of code used, as well as verifying against the literature the formulas and models used for the determination of the various parameters.
In a second phase, the work involved direct intervention on the code, with a series of investigations aimed at assessing the consistency and reliability of its results. Each investigation relaxed some of the simplifying assumptions imposed on the model, so as to identify with greater confidence the part of the code responsible for the inaccuracy of the simulator outputs. The results obtained allowed the correction of some parts of the code and the identification of the main source of error in the outputs, narrowing down the object of study for future targeted investigations.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
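The alignment of measured signals for transport delays and sensor lags can be sketched as follows. This is a minimal sketch with hypothetical names, assuming the delay and the first-order lag time constant are known; the thesis's actual processing methods are not reproduced here.

```python
def compensate(signal, dt, delay_s, tau_s):
    """Undo a pure transport delay and a first-order sensor lag.

    The sensor is modeled as the true signal x delayed by delay_s and
    filtered by tau*dy/dt + y = x; inverting the lag gives x ~ y + tau*dy/dt.
    signal: list of samples, dt: sample period (s).
    """
    n = int(round(delay_s / dt))
    # Advance the trace by the transport delay, holding the last sample.
    shifted = signal[n:] + [signal[-1]] * n
    out = []
    for i, y in enumerate(shifted):
        # Forward difference; note the lag inversion amplifies sensor noise,
        # so real data would be low-pass filtered first.
        dy = (shifted[min(i + 1, len(shifted) - 1)] - y) / dt
        out.append(y + tau_s * dy)
    return out

# A delayed step (no lag) is realigned to its true onset:
step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
recovered = compensate(step, dt=1.0, delay_s=2.0, tau_s=0.0)
```

Without such alignment, an opacity or NOx sample would be regressed against charge-flow conditions from the wrong engine cycle, corrupting the training data.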
Abstract:
Comparison of the crystal structure of a transition state analogue that was used to raise catalytic antibodies for the benzoyl ester hydrolysis of cocaine with structures calculated by ab initio, semiempirical, and solvation semiempirical methods reveals that modeling of solvation is crucial for replicating the crystal structure geometry. Both SM3 and SM2 calculations, starting from the crystal structure TSA I, converged on structures similar to the crystal structure. The 3-21G(*)/HF, 6-31G*/HF, PM3, and AM1 calculations converged on structures similar to each other, but these gas-phase structures were significantly extended relative to the condensed phase structures. Two transition states for the hydrolysis of the benzoyl ester of cocaine were located with the SM3 method. The gas phase calculations failed to locate reasonable transition state structures for this reaction. These results imply that accurate modeling of the potential energy surfaces for the hydrolysis of cocaine requires solvation methods.
Abstract:
Over the past 7 years, the enediyne anticancer antibiotics have been widely studied due to their DNA-cleaving ability. The focus of these antibiotics, represented by kedarcidin chromophore, neocarzinostatin chromophore, calicheamicin, esperamicin A, and dynemicin A, is on the enediyne moiety contained within each of these antibiotics. In its inactive form, the moiety is benign to its environment. Upon suitable activation, the system undergoes a Bergman cycloaromatization proceeding through a 1,4-dehydrobenzene diradical intermediate. It is this diradical intermediate that is thought to cleave double-stranded DNA through hydrogen-atom abstraction. Semiempirical, semiempirical-CI, Hartree–Fock ab initio, and MP2 electron-correlation methods have been used to investigate the inactive hex-3-ene-1,5-diyne reactant, the 1,4-dehydrobenzene diradical, and a transition-state structure of the Bergman reaction. Geometries calculated with different basis sets and by semiempirical methods have been used for single-point calculations using electron-correlation methods. These results are compared with the best experimental and theoretical results reported in the literature. Implications of these results for computational studies of the enediyne anticancer antibiotics are discussed.
Abstract:
The hERG voltage-gated potassium channel mediates the cardiac I(Kr) current, which is crucial for the duration of the cardiac action potential. Undesired block of the channel by certain drugs may prolong the QT interval and increase the risk of malignant ventricular arrhythmias. Although the molecular determinants of hERG block have been intensively studied, not much is known about its stereoselectivity. Levo-(S)-bupivacaine was the first drug reported to have a higher affinity to block hERG than its enantiomer. This study strives to understand the principles underlying the stereoselectivity of bupivacaine block with the help of mutagenesis analyses and molecular modeling simulations. Electrophysiological measurements of mutated hERG channels allowed for the identification of residues involved in bupivacaine binding and stereoselectivity. Docking and molecular mechanics simulations for both enantiomers of bupivacaine and terfenadine (a non-stereoselective blocker) were performed inside an open-state model of the hERG channel. The predicted binding modes enabled a clear depiction of ligand-protein interactions. Estimated binding affinities for both enantiomers were consistent with electrophysiological measurements. A similar computational procedure was applied to bupivacaine enantiomers towards two mutated hERG channels (Tyr652Ala and Phe656Ala). This study confirmed, at the molecular level, that bupivacaine stereoselectively binds the hERG channel. These results help to lay the foundation for structural guidelines to optimize the cardiotoxic profile of drug candidates in silico.
Abstract:
We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. And lastly, due to the improvements of these modifications, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method was able to significantly outperform traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
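The role of the GMM in the phoneme-recognition front-end, replacing the hard codebook assignments of a vector quantizer with soft posterior probabilities, can be sketched as follows. This is a 1-D toy sketch with illustrative names; the actual system operates on multi-dimensional spectral feature vectors.

```python
import math

def gmm_posteriors(x, weights, means, variances):
    """Posterior probability of each mixture component (e.g. phoneme class)
    for a scalar feature x under a 1-D Gaussian mixture model."""
    likelihoods = []
    for w, m, v in zip(weights, means, variances):
        likelihoods.append(w * math.exp(-(x - m) ** 2 / (2.0 * v))
                           / math.sqrt(2.0 * math.pi * v))
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

# Symmetric two-component mixture: a feature at x = 0 is equally likely
# to belong to either class, so both posteriors are 0.5.
post = gmm_posteriors(0.0, weights=[0.5, 0.5],
                      means=[-1.0, 1.0], variances=[1.0, 1.0])
```

It is these graded posteriors, rather than a single winning codebook entry, that make it possible to propagate uncertainty through the state decoding and avoid noise-dependent retraining.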
Abstract:
The US penitentiary at Lewisburg, Pennsylvania, was retrofitted in 2008 to offer the country’s first federal Special Management Unit (SMU) program of its kind. This model SMU is designed for federal inmates from around the country identified as the most intractably troublesome, and features double-celling of inmates in tiny spaces, in 23-hour or 24-hour a day lockdown, requiring them to pass through a two-year program of readjustment. These spatial tactics, and the philosophy of punishment underlying them, contrast with the modern reform ideals upon which the prison was designed and built in 1932. The SMU represents the latest punitive phase in American penology, one that neither simply eliminates men as in the premodern spectacle, nor creates the docile, rehabilitated bodies of the modern panopticon; rather, it is a late-modern structure that produces only fear, terror, violence, and death. This SMU represents the latest of the late-modern prisons, similar to other supermax facilities in the US but offering its own unique system of punishment as well. While the prison exists within the system of American law and jurisprudence, it also manifests features of Agamben’s lawless, camp-like space that emerges during a state of exception, exempt from outside scrutiny with inmate treatment typically beyond the scope of the law.