938 results for Columns.


Relevance:

10.00%

Publisher:

Abstract:

In recent years, a growing number of researchers have focused their attention on developing strategies to characterize the ADMET properties of drug candidates as early and as quickly as possible. This trend stems from the awareness that roughly half of the drugs under development never reach the market because of shortcomings in their ADME characteristics, and that at least half of the molecules that do reach the market still present some toxicological or ADME problem [1]. Indeed, it matters little how active or specific a molecule is: to become a drug it must be well absorbed, adequately distributed throughout the body, metabolized neither too quickly nor too slowly, and completely eliminated. Moreover, neither the molecule nor its metabolites should be toxic to the organism. It is therefore clear that a rapid determination of ADMET parameters in the early phases of drug development saves time and money, allowing the most promising compounds to be selected at once and those with unfavourable characteristics to be discarded. This thesis is set in this context and shows the application of a simple technique, biochromatography, to rapidly characterize the binding of compound libraries to human serum albumin (HSA). It also shows the use of an independent technique, circular dichroism, which allows the same drug-protein systems to be studied in solution, providing additional information on the stereochemistry of the binding process. HSA is the most abundant protein in blood. It acts as a carrier for a large number of molecules, both endogenous (e.g. bilirubin, thyroxine, steroid hormones, fatty acids) and xenobiotic. It also increases the solubility of lipophilic molecules that are poorly soluble in aqueous media, such as the taxanes. Binding to HSA is generally stereoselective and occurs at high-affinity binding sites. It is also well known that competition between drugs, or between a drug and endogenous metabolites, can significantly change their free fraction, modifying their activity and toxicity. Because of these properties, HSA can influence both the pharmacokinetic and the pharmacodynamic behaviour of drugs. It is not unusual for an entire drug development project to be abandoned because of too high an affinity for HSA, too short a half-life, or poor distribution due to weak HSA binding. From a pharmacokinetic point of view, HSA is therefore the most important transport protein in plasma. A large number of publications demonstrate the reliability of the biochromatographic technique in the study of biorecognition phenomena between proteins and small molecules [2-6]. My work focused mainly on the use of biochromatography as a method to evaluate the HSA-binding characteristics of several series of compounds of pharmaceutical interest, and on the improvement of this technique. To gain a better understanding of the binding mechanisms of the molecules studied, the same drug-HSA systems were also investigated by circular dichroism (CD). Initially, HSA was immobilized on a packed epoxy-silica column (50 x 4.6 mm internal diameter), using a procedure previously reported in the literature [7] with some minor modifications.
Briefly, immobilization was carried out by recirculating an HSA solution, under defined conditions of pH and ionic strength, through a previously packed column. The column was then characterized with respect to the amount of correctly immobilized protein by frontal analysis of L-tryptophan [8]. Subsequently, racemic solutions of molecules known to bind HSA enantioselectively were injected onto the column, to verify that the immobilization procedure had not altered the binding properties of the protein. Once characterized, the column was used to determine the binding percentage of a small series of HIV protease inhibitors (PIs) and to identify their binding site(s). The binding percentage was calculated from the capacity factor (k) of the samples. The value of this parameter in a purely aqueous mobile phase was obtained by linear extrapolation of the plot of log k versus the percentage (v/v) of 1-propanol in the mobile phase; only for two of the five compounds analysed could k be measured directly in the absence of organic solvent. All the PIs examined showed a high percentage of binding to HSA: in particular, the values for ritonavir, lopinavir and saquinavir exceeded 95%. These results agree with literature data obtained with an optical biosensor [9]. They are also consistent with the significant reduction in the inhibitory activity of these compounds observed in the presence of HSA, a reduction that appears larger for the compounds that bind the protein more strongly [10]. Competition studies were then carried out by zonal chromatography. In this method a solution of a competitor at known concentration is used as the mobile phase, while small amounts of the analyte are injected onto the HSA-functionalized column. The competitors were selected on the basis of their selective binding to one of the main binding sites of the protein: sodium salicylate, ibuprofen and sodium valproate were used as markers of site I, site II and the bilirubin site, respectively. These studies showed independent binding of the PIs to sites I and II, whereas weak anticooperativity was observed for the bilirubin site. The same drug-protein system was finally investigated in solution by circular dichroism. In particular, the change in the induced CD signal of an equimolar [HSA]/[bilirubin] complex was monitored upon addition of aliquots of ritonavir, chosen as representative of the series. The results confirm the slight anticooperativity at the bilirubin site previously observed in the biochromatographic studies. The same protocol was then applied to a monolithic epoxy-silica column (50 x 4.6 mm) to evaluate the reliability of the monolithic support for biochromatographic applications. The monolithic support showed good chromatographic characteristics in terms of back-pressure, efficiency and stability, as well as reliability in the determination of HSA binding parameters.
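As an illustration of the extrapolation step described above, the following minimal sketch (with invented numbers, not data from this work) fits log k against the 1-propanol fraction, extrapolates to a purely aqueous mobile phase, and converts the capacity factor to a bound percentage using the k/(k+1) relation commonly applied to protein-based HPLC columns (an assumption here, not a formula taken from the thesis):

```python
# Minimal sketch of the extrapolation described above: log k is measured at
# several 1-propanol fractions, extrapolated linearly to 0% organic modifier,
# and converted to a bound fraction. All numbers are illustrative only.
import numpy as np

propanol_pct = np.array([6.0, 5.0, 4.0, 3.0])      # % (v/v) 1-propanol in the mobile phase
log_k        = np.array([0.55, 0.78, 1.01, 1.24])  # measured log capacity factors

slope, intercept = np.polyfit(propanol_pct, log_k, 1)
log_k_aqueous = intercept                 # extrapolated value at 0% 1-propanol
k_aqueous = 10 ** log_k_aqueous

# common convention for protein affinity columns: bound fraction = k / (k + 1)
bound_pct = 100.0 * k_aqueous / (k_aqueous + 1.0)
print(f"extrapolated k ~ {k_aqueous:.1f}, HSA binding ~ {bound_pct:.1f}%")
```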
This column was used to determine the HSA binding percentage of a series of polyaminoquinones developed within a research project on Alzheimer's disease. All the compounds showed a binding percentage above 95%. In addition, a correlation was observed between the binding percentage and the characteristics of the side chain (length and number of amino groups). Competition studies on these compounds were then performed by circular dichroism, revealing an anticooperative effect of the polyaminoquinones at sites I and II, whereas binding with respect to the bilirubin site proved independent. The know-how acquired with the monolithic support described above was then applied to a shorter epoxy-silica column (10 x 4.6 mm). The method used in the previous studies to determine the binding percentage relies on data from several experiments, so considerable time is needed to obtain the final value. Using a shorter column reduces the retention times of the analytes, so the determination of the HSA binding percentage becomes much faster, turning a medium-throughput analysis into a high-throughput screening (HTS) analysis. Moreover, the shorter analysis times make it possible to avoid organic solvents in the mobile phase. After characterizing the 10 mm column with the same method described above for the other columns, a series of standards was injected at different mobile-phase flow rates to assess the possibility of using high flow rates. The column was then employed to estimate the binding percentage of a series of molecules with different chemical characteristics. The possibility of using such a short column for competition studies was also evaluated, and the binding of a series of compounds to site I was investigated. Finally, the stability of the column after extensive use was assessed. The use of chromatographic supports functionalized with albumins of different origin (rat, dog, guinea pig, hamster, mouse, rabbit) can be proposed as a future application of these HTS columns. Information on the binding of drug candidates to the different albumins would allow a better comparison between data obtained from in vitro experiments and data obtained from animal experiments, facilitating the subsequent extrapolation to humans with the speed of an HTS method; it would also reduce the number of animals used in testing. Several reports in the literature demonstrate the reliability of columns functionalized with albumins of different origin [11-13]: the use of shorter columns could broaden their applications.

Relevance:

10.00%

Publisher:

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources to those of higher luminosity and, thus, was also used to test the emission mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW proportional to L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, calling for a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, emerges naturally if one supposes that the accretion disk penetrates the central corona to different depths depending on the accretion rate (Merloni et al. 2006): the higher accreting systems host disks extending down to the last stable orbit, while the lower accreting systems host truncated disks. On the contrary, the study of the well defined X-CfA sample of Seyfert galaxies has proved that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^-1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF flows without an accretion disk.
However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered; thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results show that the accretion flow can differ significantly between objects when it is analysed in appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form spiraling in the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recurrent modulations have been measured both in the continuum emission and in the broad emission line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. Blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined in the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, a way to tackle the formation of ejecta/jets, and constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.
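The Baldwin-effect slope quoted above comes from a power-law fit of the iron-line equivalent width against the 20-100 keV luminosity, i.e. a straight-line fit in log-log space. A minimal sketch of such a fit, on synthetic data generated only to illustrate the procedure, is shown below:

```python
# Sketch of the power-law fit behind the quoted X-ray Baldwin effect,
# EW ∝ L(20-100 keV)^(-0.22). The data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
logL = rng.uniform(42.0, 45.0, 30)                              # log L(20-100 keV) [erg/s]
logEW = 2.0 - 0.22 * (logL - 43.0) + rng.normal(0, 0.1, 30)     # log EW, with scatter

coef, cov = np.polyfit(logL, logEW, 1, cov=True)
slope = coef[0]
print(f"fitted slope: {slope:.2f} +/- {np.sqrt(cov[0, 0]):.2f} (input was -0.22)")
```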

Relevance:

10.00%

Publisher:

Abstract:

Porous media play an essential role in the hydrosphere for the flow and transport of substances. Complex processes take place in this space: advection, convection, diffusion, hydromechanical dispersion, sorption, complexation, ion exchange and degradation. The flow and transport behaviour in porous media is directly determined by the geometry of the pore space itself and by the properties of the transported (or flowing) media. In practice, a large number of empirical models are used that represent the properties of the porous medium in representative elementary volumes. The material parameters used in these empirical models are determined by laboratory or field methods. Within this work the computer model PoreFlow was developed, which derives the hydraulic properties of a grain-supported porous medium from microscopic modelling of fluid flow and transport. The model porous medium is represented by a three-dimensional sphere-packing model composed of an arbitrary grain-size distribution. In the model pore space, the flow of a fluid is simulated based on a stationary solution of the Navier-Stokes equation. The results of the model simulations on different model media are compared with the results of column experiments. A clear dependence of the flow and transport parameters on the pore-space geometry is found both in the model simulations and in the column experiments.

Relevance:

10.00%

Publisher:

Abstract:

This work describes the synthesis of a new class of rod-coil block copolymers, shape-persistent macrocycles substituted with oligomers (coil-ring-coil block copolymers), and their behavior in solution and in the solid state. The coil-ring-coil block copolymers are formed by nanometer-sized shape-persistent macrocycles based on the phenyl-ethynyl backbone as the rigid block and oligomers of polystyrene or polydimethylsiloxane as the flexible blocks. The strategy followed was to synthesize the macrocycles bearing an alcohol functionality and the polymer carboxylic acids independently, and then to bind them together by esterification; the ester bond is stable and relatively easy to form. The synthesis of the shape-persistent macrocycles is based on two separate steps. In the first step the building blocks of the macrocycles are connected by Hagihara-Sonogashira coupling to form a 'half-ring' precursor that contains two free acetylenes. In the second step the half-ring is cyclized by forming two sp-sp bonds via a copper-catalyzed Glaser coupling under pseudo-high-dilution conditions. The polystyrene carboxylic acid was prepared directly by siphoning the living anionic polymer chain into a THF solution saturated with CO2, while the polydimethylsiloxane carboxylic acid was obtained by hydrosilylating an unsaturated benzyl ester with an Si-H terminated polydimethylsiloxane and cleaving the ester. Carbodiimide coupling was found to be the best way to connect macrocycles and polymers in high yield and high purity. The polystyrene-ring-polystyrene block copolymers are, depending on the molecular weight of the polystyrene, lyotropic liquid crystals in cyclohexane. The aggregation behavior of the copolymers in solution was investigated in more detail using several techniques. It can be concluded that the polystyrene-ring-polystyrene block copolymers aggregate into hollow cylinder-like objects with an average length of 700 nm, through a combination of shape complementarity and demixing of the rigid and flexible polymer parts. The resulting structure can be described as a supramolecular hollow cylindrical brush. If the lyotropic solutions of the polystyrene-ring-polystyrene block copolymers are dried, they remain birefringent, indicating that the solid state has an ordered structure. The polydimethylsiloxane-ring-polydimethylsiloxane block copolymers are more or less fluid at room temperature and are all birefringent as well (thermotropic liquid crystals), proving that the copolymers are ordered in the fluid state. Through a careful investigation using electron diffraction and wide-angle X-ray scattering, it has been possible to derive a model for the 3D order of the copolymers. The data indicate a lamellar structure for both types of copolymers: the macrocycles are arranged in a layer of columns, and these crystalline layers are separated by amorphous layers which contain the polymer substituents.

Relevance:

10.00%

Publisher:

Abstract:

Investigations on the formation and specification of neural precursor cells in the central nervous system of the Drosophila melanogaster embryo. The specification of a unique cell fate during development of a multicellular organism is often a function of its position. The Drosophila central nervous system (CNS) provides an ideal system to dissect the signalling events during development that lead to cell-specific patterns. The different cell types in the CNS are formed from relatively few precursor cells, the neuroblasts (NBs), which delaminate from the neurogenic region of the ectoderm. Delamination occurs in five waves, S1-S5, finally leading to a subepidermal layer consisting of about 30 NBs, each with a unique identity, arranged in a stereotyped spatial pattern in each hemisegment. This identity depends on several factors such as the concentrations of various morphogens, cell-cell interactions and long-range signals present at the position and time of a neuroblast's birth. The early NBs, delaminating during S1 and S2, form an orthogonal array of four rows (2/3, 4, 5, 6/7) and three columns (medial, intermediate, and lateral). However, this three-column and four-row arrangement is only transitory during early stages of neurogenesis and is obscured by late-emerging (S3-S5) neuroblasts (Doe and Goodman, 1985; Goodman and Doe, 1993). Therefore the aim of my study has been to identify novel genes which play a role in the formation or specification of late-delaminating NBs. In this study the gene anterior open or yan was picked up in a genetic screen to identify novel, as yet uncharacterized genes involved in late neuroblast formation and specification. I have shown that the gene yan is responsible for maintaining the cells of the neuroectoderm in an undifferentiated state by interfering with the Notch signalling mechanism. Secondly, I have studied the function and interactions of segment polarity genes within a certain neuroectodermal region, namely the engrailed (en) expressing domain, with regard to the fate specification of a set of late neuroblasts, namely NB 6-4 and NB 7-3. I have dissected the regulatory interactions of the segment polarity genes wingless (wg), hedgehog (hh) and engrailed (en), which maintain each other's expression, to show that En is a prerequisite for neurogenesis, and I show that the interplay of the segmentation genes naked (nkd) and gooseberry (gsb), both of which are targets of wingless (wg) activity, leads to differential commitment of the NB 7-3 and NB 6-4 cell fates. I have shown that in the absence of either nkd or gsb one NB fate is replaced by the other. However, the temporal sequence of delamination is maintained, suggesting that the formation and specification of these two NBs are under independent control.

Relevance:

10.00%

Publisher:

Abstract:

Setup of a continuous multidimensional high-performance liquid chromatography system for the separation of proteins and peptides with integrated size-selective sample fractionation. A multidimensional HPLC separation method was developed for proteins and peptides with a molecular weight below 15 kDa. In the first step, the target analytes are separated from higher-molecular-weight and non-ionic components using restricted access materials (RAM) with ion-exchange functionality. The proteins are then separated on an analytical ion-exchange column and on reversed-phase (RP) columns. To avoid sample losses, a continuously operating, fully automated system was built, based on different separation speeds and four parallel RP columns. Two RP columns are always eluted simultaneously, but with staggered starting times, so that flat gradients still provide sufficient separation performance. While a third column is regenerated, the fourth column is loaded by enriching the proteins and peptides at the column head. During the total analysis time of 96 minutes, fractions from the first dimension are transferred onto the RP columns at 4-minute intervals and separated within 8 minutes, resulting in 24 RP chromatograms. Test substances included standard proteins as well as proteins and peptides from human hemofiltrate and from lung fibroblast cell culture supernatants. Furthermore, fractions were collected and analysed by MALDI-TOF mass spectrometry. For a single injection, more than 1000 peaks were resolved in the 24 RP chromatograms; the theoretical peak capacity is approximately 3000.
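A rough back-of-the-envelope check of the quoted figures, assuming the usual product rule for two-dimensional peak capacity (the per-gradient RP peak capacity below is an assumed illustrative value, not a number from the thesis):

```python
# Back-of-the-envelope check of the quoted 2D peak capacity, using the
# common product rule n_2D ≈ n_1 × n_2.
total_time_min    = 96
fraction_interval = 4
n_fractions = total_time_min // fraction_interval   # first-dimension fractions -> 24 RP runs
n_rp = 125                                           # assumed peak capacity of one 8-min RP gradient

n_2d = n_fractions * n_rp
print(f"{n_fractions} RP chromatograms, theoretical 2D peak capacity ~ {n_2d}")   # ~3000
```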

Relevance:

10.00%

Publisher:

Abstract:

The work for the present thesis started in California, during my semester as an exchange student overseas. California is known worldwide for its seismicity and for its effort in the earthquake engineering research field. For this reason, I immediately found interesting the proposal of the Structural Dynamics professor, Maria Q. Feng, to work on a pushover analysis of the existing Jamboree Road Overcrossing bridge. Concrete is a popular building material in California and, for the most part, it serves its functions well. However, concrete is inherently brittle and performs poorly during earthquakes if not reinforced properly. The San Fernando Earthquake of 1971 dramatically demonstrated this characteristic. Shortly thereafter, code writers revised the design provisions for new concrete buildings so as to provide adequate ductility to resist strong ground shaking. There remain, nonetheless, millions of square feet of non-ductile concrete buildings in California. The purpose of this work is to perform a pushover analysis and compare the results with those of a nonlinear time-history analysis of an existing bridge located in Southern California. The analyses have been executed with the software OpenSees, the Open System for Earthquake Engineering Simulation. The Jamboree Road Overcrossing (JRO) is classified as a Standard Ordinary Bridge: a typical three-span continuous cast-in-place prestressed post-tensioned box girder. The total length of the bridge is 366 ft, and the heights of the two bents are 26.41 ft and 28.41 ft, respectively. Both the pushover analysis and the nonlinear time-history analysis require a model that takes into account the nonlinearities of the system; in order to execute nonlinear analyses of highway bridges it is essential to incorporate an accurate model of the material behavior. It has been observed that, after destructive earthquakes, columns are among the most damaged elements of highway bridges. To evaluate the performance of bridge columns during seismic events, an adequate model of the column must be incorporated. Part of the work of the present thesis is, in fact, dedicated to the modeling of the bents. Different types of nonlinear elements have been studied and modeled, with emphasis on the determination and location of the plasticity zone length. Furthermore, different models for the concrete and steel materials have been considered, and the parameters that define the constitutive laws of the different materials have been selected with care. The work is structured into four chapters; a brief overview of their content follows. The first chapter introduces the concepts related to capacity design, the current philosophy of seismic design. Furthermore, nonlinear analyses, both static (pushover) and dynamic (time-history), are presented. The final paragraph concludes with a short description of how to determine the seismic demand at a specific site, according to the latest design criteria in California. The second chapter deals with the formulation of force-based finite elements and the issues regarding the objectivity of the response in the nonlinear field. Both concentrated and distributed plasticity elements are discussed in detail. The third chapter presents the existing structure, the software OpenSees, and the modeling assumptions and issues. The creation of the nonlinear model represents a central part of this work.
Nonlinear material constitutive laws for concrete and reinforcing steel are discussed in detail, as are the different scenarios employed in the column modeling. Finally, the results of the pushover analysis are presented in chapter four. Capacity curves are examined for the different model scenarios used, and the failure modes of concrete and steel are discussed. The capacity curve is converted into a capacity spectrum and intersected with the design spectrum. In the last paragraph, the results of the nonlinear time-history analyses are compared to those of the pushover analysis.
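For readers unfamiliar with OpenSees, the sketch below shows what a displacement-controlled pushover of a single cantilever pier with a fiber section and a force-based (distributed plasticity) element looks like in openseespy. All geometry, material values and load steps are illustrative placeholders, not the actual JRO model.

```python
# Minimal openseespy pushover sketch: cantilever pier, fiber section,
# force-based element, displacement control. Units: kip, in (illustrative).
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 3)

H = 26.41 * 12.0                       # pier height, in (bent height from the text)
ops.node(1, 0.0, 0.0)
ops.node(2, 0.0, H)
ops.fix(1, 1, 1, 1)

# uniaxial materials: concrete and reinforcing steel (placeholder values, ksi)
ops.uniaxialMaterial('Concrete01', 1, -5.0, -0.004, -4.0, -0.014)
ops.uniaxialMaterial('Steel02', 2, 68.0, 29000.0, 0.01, 18.0, 0.925, 0.15)

# circular fiber section: concrete patch plus a ring of longitudinal bars
D = 48.0                               # section diameter, in
ops.section('Fiber', 1)
ops.patch('circ', 1, 24, 8, 0.0, 0.0, 0.0, D / 2.0, 0.0, 360.0)
ops.layer('circ', 2, 20, 1.56, 0.0, 0.0, D / 2.0 - 3.0)

# force-based element with 5 Gauss-Lobatto integration points
ops.geomTransf('Linear', 1)
ops.beamIntegration('Lobatto', 1, 1, 5)
ops.element('forceBeamColumn', 1, 1, 2, 1, 1)

# lateral reference load for the pushover
ops.timeSeries('Linear', 1)
ops.pattern('Plain', 1, 1)
ops.load(2, 1.0, 0.0, 0.0)

# displacement-controlled static analysis
ops.system('BandGeneral')
ops.numberer('Plain')
ops.constraints('Plain')
ops.test('NormDispIncr', 1.0e-6, 100)
ops.algorithm('Newton')
ops.integrator('DisplacementControl', 2, 1, 0.05)   # 0.05 in per step at the top node
ops.analysis('Static')

for _ in range(200):                   # push to ~10 in of top displacement
    if ops.analyze(1) != 0:
        break
    # capacity curve: top displacement vs. base shear (= load factor here)
    print(ops.nodeDisp(2, 1), ops.getLoadFactor(1))
```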

Relevance:

10.00%

Publisher:

Abstract:

This work presents a sensitivity analysis of the most significant design parameters for the mooring systems of floating wave energy production devices, commonly known as Floating Wave Energy Converters (F-WEC). Converters of this type are installed offshore and can rely on different working principles for energy production: the exploitation of the oscillatory motion of the waves (Wave Active Bodies, to which most converters belong), wave overtopping (Overtopping Devices), or the principle of the oscillating water column (Oscillating Water Columns). The choice of the installation site for such devices implies an adequate design of the mooring system, whose purpose is to keep the device within a sufficiently small neighbourhood of the point where it was originally placed. At the same time, the moorings should be regarded as an integral element of the system to be designed, in order to increase the efficiency of wave power extraction. The main issues related to mooring systems are the strength of the system (reliability, fatigue) and its cost. The two issues are linked, since increasing the strength increases the complexity of the mooring system (more lines, larger diameters, greater weight per unit length of each line, etc.). It is clear, however, that more reliable systems would lower production costs and would certainly make wave energy more competitive on the energy market. Individual devices require different design approaches, and the economics of a mooring system are closely tied to the design of the device itself. To date, a number of near-prototype-scale WEC installations have failed because of the collapse of their own mooring system, drawing attention to the problem of an efficient, reliable and safe design.

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with an investigation of decomposition and reformulation to solve integer linear programming problems. This method is often a very successful approach computationally, producing high-quality solutions for well-structured combinatorial optimization problems like vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is to develop a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, applicable as a resolving black-box algorithm, and works as a complement and alternative to the usual solution techniques. The idea of decomposing and reformulating (usually called in the literature Dantzig-Wolfe Decomposition, DWD) is, given a MIP, to convexify one (or more) subset(s) of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP several decompositions can be defined, depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation), otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition it can be incorporated in a branch-and-bound scheme (Branch and Price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a strong speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
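A compact illustration of the master/pricing loop described above is the classical cutting-stock column generation. The sketch below uses PuLP and invented data and is only meant to show the structure of the iteration (restricted master LP, duals, knapsack pricing, stop on non-negative reduced cost); it is not the generic framework developed in the thesis.

```python
# Column-generation sketch for the cutting-stock problem (illustrative data).
import pulp

W = 100                        # stock roll width (assumed)
widths = [45, 36, 31, 14]      # piece widths (assumed)
demand = [97, 610, 395, 211]   # piece demands (assumed)

# start with trivial patterns: one piece type per roll
patterns = [[W // widths[j] if i == j else 0 for i in range(len(widths))]
            for j in range(len(widths))]

while True:
    # ----- restricted master problem (LP relaxation) -----
    master = pulp.LpProblem("master", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{p}", lowBound=0) for p in range(len(patterns))]
    master += pulp.lpSum(x)
    for i, d in enumerate(demand):
        master += pulp.lpSum(patterns[p][i] * x[p]
                             for p in range(len(patterns))) >= d, f"dem{i}"
    master.solve(pulp.PULP_CBC_CMD(msg=False))
    duals = [master.constraints[f"dem{i}"].pi for i in range(len(demand))]

    # ----- pricing problem: integer knapsack over the dual prices -----
    pricing = pulp.LpProblem("pricing", pulp.LpMaximize)
    a = [pulp.LpVariable(f"a{i}", lowBound=0, cat="Integer") for i in range(len(widths))]
    pricing += pulp.lpSum(duals[i] * a[i] for i in range(len(widths)))
    pricing += pulp.lpSum(widths[i] * a[i] for i in range(len(widths))) <= W
    pricing.solve(pulp.PULP_CBC_CMD(msg=False))

    reduced_cost = 1 - pulp.value(pricing.objective)
    if reduced_cost >= -1e-6:          # no improving column: stop
        break
    patterns.append([int(v.value()) for v in a])   # add the new column

print("LP bound:", pulp.value(master.objective), "patterns:", len(patterns))
```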

Relevance:

10.00%

Publisher:

Abstract:

Design parameters, process flows, electro-thermal-fluidic simulations and experimental characterizations of Micro-Electro-Mechanical Systems (MEMS) suited for gas-chromatographic (GC) applications are presented and thoroughly described in this thesis, whose topic belongs to the research activities the Institute for Microelectronics and Microsystems (IMM)-Bologna has been involved in for several years, i.e. the development of micro-systems for chemical analysis, based on silicon micro-machining techniques and able to perform analyses of complex gaseous mixtures, especially in the field of environmental monitoring. In this regard, attention has been focused on the development of micro-fabricated devices to be employed in a portable mini-GC system for the analysis of aromatic Volatile Organic Compounds (VOC) like benzene, toluene, ethyl-benzene and xylene (BTEX), i.e. chemical compounds which can significantly affect the environment and human health because of their demonstrated carcinogenicity (benzene) or toxicity (toluene, xylene) even at parts-per-billion (ppb) concentrations. The most significant results achieved through the laboratory functional characterization of the mini-GC system are reported, together with the results of in-field analyses carried out in a station of the Bologna air monitoring network and compared with those provided by a commercial GC system. The development of more advanced prototypes of micro-fabricated devices specifically suited for FAST-GC is also presented (silicon capillary columns, Ultra-Low-Power (ULP) Metal OXide (MOX) sensor, Thermal Conductivity Detector (TCD)), together with the technological processes for their fabrication. The experimentally demonstrated very high sensitivity of ULP-MOX sensors to VOCs, coupled with their extremely low power consumption, makes the developed ULP-MOX sensor the best-performing metal oxide sensor reported in the literature so far, while preliminary test results proved that the developed silicon capillary columns are capable of performances comparable to those of the best fused-silica capillary columns. Finally, the development and validation of a coupled electro-thermal Finite Element Model suited for both steady-state and transient analysis of the micro-devices is described; the model was subsequently extended with a fluidic part to investigate device behaviour in the presence of a gas flowing at given volumetric flow rates.

Relevance:

10.00%

Publisher:

Abstract:

Therapeutic drug monitoring (TDM) is used for individual dose adjustment, to increase the efficacy of drug treatment and to reduce the occurrence of side effects. For the TDM of antipsychotics and antidepressants, however, the problem is that there are more than 50 drugs; a TDM laboratory must accordingly measure more than 50 different active compounds plus their active metabolites. Liquid chromatography (LC or HPLC) allows the analysis of many different drugs, and LC with column switching permits automation: blood serum or plasma, with or without prior protein precipitation, is loaded onto a pre-column; after interfering matrix components are washed out, the drugs are separated on a downstream analytical column and detected by ultraviolet spectroscopy (UV) or mass spectrometry (MS). The aim of this work was to develop LC methods that allow the measurement of as many antipsychotics and antidepressants as possible and that are suitable for routine TDM. A column packed with C8-modified silica (20 µm, 10x4.0 mm I.D.) proved optimal in preliminary experiments with respect to extraction behaviour, regenerability and stability. With a first column-switching HPLC-UV method, 20 different psychotropic drugs including their metabolites, i.e. a total of 30 different substances, could be quantified. The analysis time was 30 minutes. The pre-column allowed 150 injections; the analytical column withstood more than 300 plasma injections. Depending on the analyte, however, the injection volume, the flow rate or the detection wavelength had to be changed, so the method was only of limited suitability for routine use. With a second HPLC-UV method, 43 different antipsychotics and antidepressants including metabolites could be detected. After clean-up on C8 material (10 µm, 10x4 mm I.D.), separation was performed on Hypersil ODS (5 µm particle size) in the analytical column (250x4.6 mm I.D.) with 37.5% acetonitrile in the analytical eluent. The optimal flow rate was 1.5 ml/min and the detection wavelength 254 nm. With this method, 7 to 8 different substances could be measured in a single sample. The method was validated for the antipsychotics clozapine, olanzapine, perazine, quetiapine and ziprasidone. The coefficient of variation (CV%) for imprecision was between 0.2 and 6.1%. The method was linear over the required measuring range (correlation coefficients R2 between 0.9765 and 0.9816). The absolute and analytical recoveries were between 98 and 118%. The lower limits of detection required for TDM were reached; for olanzapine the limit was 5 ng/ml. The method was tested on patient samples and proved very well suited for TDM. After retrospective evaluation of patient data, possible therapeutic ranges could be formulated for the first time for quetiapine (40-170 ng/ml) and ziprasidone (40-130 ng/ml). With a mass spectrometer as detector, the measurement of eight neuroleptics and their metabolites was possible; 12 substances could be determined in one run: amisulpride, clozapine, N-desmethylclozapine, clozapine N-oxide, haloperidol, risperidone, 9-hydroxyrisperidone, olanzapine, perazine, N-desmethylperazine, quetiapine and ziprasidone. After clean-up on C8 material (20 µm, 10x4.0 mm I.D.), separation was performed on Synergi MAX-RP C12 (4 µm, 150 x 4.6 mm).
Validation of the HPLC-MS method demonstrated a linear relationship between concentration and detector signal (R2 = 0.9974 to 0.9999). Imprecision was between 0.84 and 9.78%. The lower limits of detection required for TDM were reached. There was no evidence of ion suppression by matrix components. The absolute and analytical recoveries were between 89 and 107%. It became apparent that the HPLC-MS method can be extended without modification and that, apparently, more than 30 different psychotropic drugs can be covered. The liquid chromatographic methods developed here provide new procedures for the TDM of antipsychotics and antidepressants that allow different psychotropic drugs and their active metabolites to be measured with a single method. This should improve the treatment of psychiatric patients, particularly those treated with antipsychotics.
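For reference, the validation figures quoted above (imprecision as CV%, linearity as R2) correspond to simple statistics of the kind sketched below; the replicate and calibration values are invented for illustration, not data from this work:

```python
# Illustrative computation of imprecision (CV%) and calibration linearity (R^2).
import numpy as np

# imprecision: repeated measurements of one quality-control sample (ng/ml, invented)
qc = np.array([98.2, 101.5, 99.8, 100.9, 97.6, 102.1])
cv_pct = 100.0 * qc.std(ddof=1) / qc.mean()

# linearity: nominal vs. measured calibration concentrations (ng/ml, invented)
nominal  = np.array([10, 25, 50, 100, 200, 400], dtype=float)
measured = np.array([10.4, 24.1, 51.2, 98.7, 203.5, 396.0])
r = np.corrcoef(nominal, measured)[0, 1]

print(f"CV% = {cv_pct:.1f}, R^2 = {r**2:.4f}")
```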

Relevance:

10.00%

Publisher:

Abstract:

The focus of this thesis was the in-situ application of the new analytical technique "GCxGC" in both the marine and continental boundary layer, as well as in the free troposphere. Biogenic and anthropogenic VOCs were analysed and used to characterise the local chemistry at the individual measurement sites. The first part of the thesis work was the characterisation of a new set of columns that was to be used later in the field. To simplify identification, a time-of-flight mass spectrometer (TOF-MS) detector was coupled to the GCxGC. In the field the TOF-MS was replaced by a more robust and easier-to-handle flame ionisation detector (FID), which is better suited for quantitative measurements. During this process, a variety of volatile organic compounds could be assigned to different environmental sources, e.g. plankton sources, eucalyptus forest or urban centers. In-situ measurements of biogenic and anthropogenic VOCs were conducted at the Meteorological Observatory Hohenpeissenberg (MOHP), Germany, applying a thermodesorption-GCxGC-FID system. The measured VOCs were compared to GC-MS measurements routinely conducted at the MOHP as well as to PTR-MS measurements. Furthermore, a compressed ambient air standard was measured with three different gas chromatographic instruments and the results were compared. With few exceptions, the in-situ as well as the standard measurements revealed good agreement between the individual instruments. Diurnal cycles were observed, with differing patterns for the biogenic and the anthropogenic compounds. The variability-lifetime relationship of compounds with atmospheric lifetimes from a few hours to a few days in the presence of O3 and OH was examined. It revealed a weak but significant influence of chemistry on these short-lived VOCs at the site. The relationship was also used to estimate the average OH radical concentration during the campaign, which was compared to in-situ OH measurements (1.7 x 10^6 molecules/cm^3, 0.071 ppt) for the first time. The OH concentration of 3.5 to 6.5 x 10^5 molecules/cm^3 (0.015 to 0.027 ppt) obtained with this method represents an approximation of the average OH concentration influencing the discussed VOCs from emission to measurement. Based on these findings, the average concentration of the nighttime NO3 radical was estimated using the same approach and found to range from 2.2 to 5.0 x 10^8 molecules/cm^3 (9.2 to 21.0 ppt). During the MINATROC field campaign, in-situ ambient air measurements with the GCxGC-FID were conducted at Tenerife, Spain. Although the station is situated mainly in the free troposphere, local influences of anthropogenic and biogenic VOCs were observed. Thanks to a strong dust event originating from Western Africa it was possible to compare the mixing ratios during normal and elevated dust loading in the atmosphere. The mixing ratios during the dust event were found to be lower. However, this could not be attributed to heterogeneous reactions, as there was a change in the wind direction from northwesterly to southeasterly during the dust event.
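The conversions between number concentrations and mixing ratios quoted above can be checked in a few lines; the air number density below assumes near-surface conditions at the station altitude (roughly 900 hPa and 278 K, an assumption), so the ppt values agree with the quoted ones only to within rounding:

```python
# Quick check of the number-density-to-mixing-ratio conversions quoted above.
K_B = 1.380649e-23                     # Boltzmann constant, J/K
p, T = 90000.0, 278.0                  # Pa, K (assumed near-surface conditions at MOHP)
n_air = p / (K_B * T) / 1e6            # air number density, molecules per cm^3

for n in (1.7e6, 3.5e5, 6.5e5, 2.2e8, 5.0e8):   # OH and NO3 values from the text
    print(f"{n:.1e} molecules/cm^3  ->  {n / n_air * 1e12:.3f} ppt")
```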

Relevance:

10.00%

Publisher:

Abstract:

In this work, an improved protocol for inverse size-exclusion chromatography (ISEC) was established to assess important pore-structural data of porous silicas used as stationary phases in packed chromatographic columns. After the validity of the values generated by ISEC was checked by comparison with data obtained from traditional methods like nitrogen sorption at 77 K (Study A), the method could be employed successfully as a valuable tool in the development of bonded poly(methacrylate)-coated silicas, for which traditional methods generate partially incorrect pore-structural information (Study B). Study A: Different mesoporous silicas were converted by a pseudomorphic transition into ordered MCM-41-type silica while maintaining particle size and shape. The essential parameters, such as specific surface area, average pore diameter, specific pore volume and the pore connectivity from ISEC, remained nearly the same, which was reflected in the identical course of the theoretical-plate-height versus linear-velocity curves. Study B: In the development of bonded poly(methacrylate)-coated silicas for the reversed-phase separation of biopolymers, ISEC was the only method that generated valid pore-structural information on the polymer-coated materials. Synthesis procedures were developed to obtain, reproducibly, covalently bonded poly(methacrylate) coatings with good thermal stability on different base materials, employing both particulate and monolithic materials.

Relevance:

10.00%

Publisher:

Abstract:

Iodine chemistry plays an important role in tropospheric ozone depletion and new particle formation in the Marine Boundary Layer (MBL). The sources, reaction pathways and sinks of iodine are investigated using lab experiments and field observations. The aims of this work are, firstly, to develop analytical methods for iodine measurements of marine aerosol samples, especially for the speciation of soluble iodine, and secondly, to apply these analytical methods to field-collected aerosol samples and to characterise aerosol iodine in the MBL. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) was the technique used for iodine measurements. Offline methods using water extraction and tetramethylammonium hydroxide (TMAH) extraction were applied to measure total soluble iodine (TSI) and total insoluble iodine (TII) in the marine aerosol samples. External standard calibration and isotope dilution analysis (IDA) were both used for iodine quantification, and the limits of detection (LODs) were 0.1 μg L-1 for both TSI and TII measurements. Online couplings of Ion Chromatography (IC)-ICP-MS and gel electrophoresis (GE)-ICP-MS were both developed for soluble iodine speciation. Anion exchange columns were adopted for the IC-ICP-MS systems. Iodide, iodate and unknown signal(s) were observed with these methods. Iodide and iodate were separated successfully, with LODs of 0.1 and 0.5 μg L-1, respectively. The unknown signals were attributed to soluble organic iodine species (SOI) and quantified against the iodide calibration curve, but have not yet been clearly identified and quantified. These analytical methods were applied to iodine measurements of marine aerosol samples from field campaigns worldwide. The TSI and TII concentrations (medians) in PM2.5 were 240.87 pmol m-3 and 105.37 pmol m-3 at Mace Head, on the west coast of Ireland, and 119.10 pmol m-3 and 97.88 pmol m-3 in the cruise campaign over the North Atlantic Ocean during June-July 2006. Inorganic iodine, namely iodide and iodate, was the minor iodine fraction in both campaigns, accounting for 7.3% (median) and 5.8% (median) of PM2.5 iodine at Mace Head and over the North Atlantic Ocean, respectively. Iodide concentrations were higher than iodate in most of the samples. In contrast, more than 90% of TSI was SOI, and the SOI concentration was correlated significantly with the iodide concentration; the correlation coefficients (R2) were higher than 0.5 both at Mace Head and in the first leg of the cruise. Size-fractionated aerosol samples collected with a 5-stage Berner cascade impactor showed similar proportions of inorganic and organic iodine. Significant correlations between SOI and iodide were obtained in the particle size ranges 0.25-0.71 μm and 0.71-2.0 μm, with better correlations on sunny days. TSI and iodide resided mainly in the fine particle size range (< 2.0 μm) and iodate in the coarse range (2.0-10 μm). Aerosol iodine was suggested to be related to primary iodine release in the tidal zone. Natural meteorological conditions such as solar radiation and rain were observed to influence aerosol iodine. During the ship campaign over the North Atlantic Ocean (January-February 2007), the TSI concentrations (medians) ranged from 35.14 to 60.63 pmol m-3 among the 5 stages. Likewise, SOI was found to be the most abundant iodine fraction of TSI, with a median of 98.6%. A significant correlation between SOI and iodide was also present in the size range 2.0-5.9 μm.
Higher iodate concentrations were again found in the larger particle size range, similar to Mace Head. Air-mass transport from the biogenic bloom region and the Antarctic ice front sector was observed to play an important role in aerosol iodine enhancement. The TSI concentrations observed along the 30,000 km cruise round trip from East Asia to Antarctica during November 2005 - March 2006 were much lower than in the other campaigns, with a median of 6.51 pmol m-3. Approximately 70% of the TSI was SOI on average, and the abundance of inorganic iodine, including iodate and iodide, was less than 30% of TSI. The median value of iodide was 1.49 pmol m-3, more than four-fold higher than that of iodate (median 0.28 pmol m-3). The spatial variation indicated that the highest aerosol iodine appeared in the tropical area. Iodine levels were considerably lower in coastal Antarctica, with a TSI median of 3.22 pmol m-3. However, air-mass transport from the ice front sector was correlated with enhanced TSI levels, suggesting an as-yet-unidentified source of iodine in the polar region. In addition, a significant correlation between SOI and iodide was also found in this campaign. A global distribution of aerosol iodine thus emerged from the field campaigns of this work. SOI was shown to be globally ubiquitous, given its presence at the different sampling locations and its high proportion of TSI in marine aerosols. The correlations between SOI and iodide were obtained not only at different locations but also in different seasons, pointing to a possible mechanism of iodide production through SOI decomposition. Nevertheless, future studies are needed to improve the current understanding of iodine chemistry in the MBL (e.g. SOI identification and quantification, as well as updated modelling involving organic matter).
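For orientation, the step from the ICP-MS extract concentration (µg L-1, as in the quoted LODs) to an atmospheric loading (pmol m-3, as in the reported medians) is a simple conversion of the kind sketched below; the extract and air volumes are assumed values, not those used in this work:

```python
# Illustrative conversion from extract concentration to atmospheric iodine loading.
M_IODINE = 126.904            # molar mass of iodine, g/mol

def iodine_pmol_per_m3(conc_ug_per_L, extract_volume_mL, air_volume_m3):
    """Convert an extract concentration (µg/L) to pmol of iodine per m^3 of sampled air."""
    iodine_ug = conc_ug_per_L * extract_volume_mL / 1000.0   # µg of iodine in the extract
    iodine_pmol = iodine_ug / M_IODINE * 1e6                 # µg -> µmol -> pmol
    return iodine_pmol / air_volume_m3

# e.g. a 0.1 µg/L signal (the quoted LOD) in a 10 mL extract from 20 m^3 of air
print(f"{iodine_pmol_per_m3(0.1, 10.0, 20.0):.3f} pmol/m^3")
```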

Relevance:

10.00%

Publisher:

Abstract:

Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH analysis of nitrogen sorption data, which overestimated the mesopore size distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, developed especially for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of the pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry using newly recommended mercury contact angle values.
The results of the mesopore characterization of monolithic silica columns by the three methods indicated that all methods were useful with respect to the volume-based pore size distribution, but only the ISEC method with the implemented PPM and PNM models gave the average pore size and distribution based on the number average, as well as the pore connectivity values.
The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment; it was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid permeation, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille relations. The result was that it is not the flow-through pore size that governs the column back pressure; rather, the surface-area-to-volume ratio of the silica skeleton is decisive. Thus the monolith with the lowest ratio will be the most permeable.
The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.
The measured pore characterization parameters were then linked to the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores, owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in small-scale silica monoliths with a narrow distribution.
The optimum regimes of the pore structural parameters for given target parameters in HPLC separations were predicted. A low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-average size distribution of the mesopores is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume depends strongly on the nominal pore diameter of the number-averaged pore size distribution. The mesopore size therefore has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.
The study of the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeleton and the external porosity are decisive for the column efficiency, the latter being independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and the mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. Column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with total porosity, although this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
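A minimal sketch of a Hagen-Poiseuille-type estimate of the kind referred to above, treating the flow-through pores as a bundle of parallel capillaries and computing the pressure drop for a given pore diameter; all numerical values are illustrative assumptions, not data from this work:

```python
# Bundle-of-capillaries estimate of the column back pressure (Hagen-Poiseuille).
import math

def back_pressure_bar(pore_diameter_um, column_length_cm, flow_ml_min,
                      column_id_mm=4.6, external_porosity=0.7, viscosity_pa_s=1.0e-3):
    """Pressure drop (bar) across a capillary-bundle model of the monolith."""
    d = pore_diameter_um * 1e-6                       # pore diameter, m
    L = column_length_cm * 1e-2                       # column length, m
    area = math.pi * (column_id_mm * 1e-3 / 2) ** 2   # column cross-section, m^2
    q = flow_ml_min * 1e-6 / 60.0                     # volumetric flow, m^3/s
    u_pore = q / (area * external_porosity)           # mean velocity inside the pores, m/s
    dp = 32.0 * viscosity_pa_s * L * u_pore / d ** 2  # Hagen-Poiseuille pressure drop, Pa
    return dp / 1e5

# e.g. a 10 cm column with 2 µm flow-through pores eluted with water at 1 ml/min
print(f"{back_pressure_bar(2.0, 10.0, 1.0):.1f} bar")
```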