940 results for Convective Constraint Release
Abstract:
The thesis work presented here investigates the application of learning techniques aimed at a more efficient execution of a portfolio of constraint solvers. A constraint solver is a program that, given a constraint problem as input, computes a solution using a variety of techniques. Constraint problems are very common in real life: examples such as scheduling train journeys or rostering the crews of an airline are all constraint problems. A constraint problem is formalized as a constraint satisfaction problem (CSP). A CSP is described by a set of variables, each taking values from a specific domain, and a set of constraints relating the variables and the values they may assume. One technique for optimizing the resolution of such problems is the portfolio approach. This technique, also used in fields such as economics, combines several solvers which together can produce better results than a single-solver approach. In this work we develop a new technique that combines a portfolio of constraint solvers with machine learning techniques. Machine learning is a field of artificial intelligence whose goal is to endow machines with a kind of 'intelligence'. One application example is evaluating past instances of a problem and using them to make future choices, a process also found in human cognition. Specifically, we reason in terms of classification. A classification assigns a discrete output value to a set of input features, such as true or false when an email is classified as spam or not. The learning phase is carried out using part of CPHydra, a constraint solver portfolio developed at University College Cork (UCC).
Of this portfolio algorithm we use only the characteristics employed to describe certain aspects of one CSP with respect to another; these characteristics are also called features. We then build a series of classifiers based on the specific behavior of the solvers. The combination of these classifiers with the portfolio approach aims to assess whether CPHydra's features are good and whether classifiers based on them are reliable. To justify the first result, we carry out a comparison with one of the best state-of-the-art portfolios, SATzilla. Once the quality of the features used for classification is established, we solve the problems by simulating a scheduler. These simulations test different rules built from the previously introduced classifiers. We first act on a single-processor scenario and then extend to a multi-processor one. In these experiments we verify that the performance obtained by applying the rules built on the classifiers is better than an execution limited to using the best solver of the portfolio. The thesis work was carried out in collaboration with the 4C research centre at University College Cork. A scientific article based on this work was written and submitted to the International Joint Conference on Artificial Intelligence (IJCAI) 2011. At the time of submission of the thesis we had not yet been informed of the article's acceptance; however, the reviewers' responses indicated that the proposed method is interesting.
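The classification-based solver selection described above can be sketched roughly as follows. All names, features, and training data here are invented for illustration (the thesis uses CPHydra's features and real solver runtimes), and a simple nearest-neighbour vote stands in for the actual learned classifiers:

```python
# Sketch: one binary classifier per solver predicts, from the feature vector
# of a CSP instance, whether that solver would finish within the timeout.
# A scheduling rule then runs the solvers predicted to succeed.
# Solver names, feature vectors, and labels below are all hypothetical.
from math import dist

def knn_predict(train, query, k=1):
    """k-nearest-neighbour vote: train is a list of (feature_vector, label)."""
    nearest = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    votes = sum(label for _, label in nearest)
    return votes * 2 > k  # majority of the k nearest past cases succeeded

# Training data: (features of a past CSP instance, solved within timeout?)
history = {
    "solver_a": [([12, 0.3], True),  ([80, 0.9], False), ([15, 0.4], True)],
    "solver_b": [([12, 0.3], False), ([80, 0.9], True),  ([70, 0.8], True)],
}

def schedule(features, k=1):
    """Scheduling rule: run the solvers predicted to succeed on this instance."""
    predicted = [s for s, data in history.items() if knn_predict(data, features, k)]
    return predicted or list(history)  # fall back to the whole portfolio

print(schedule([14, 0.35]))  # → ['solver_a']
```

The fallback to the whole portfolio mirrors the idea that a misclassification should degrade gracefully to the plain portfolio approach rather than fail outright.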
Abstract:
The work presented in this thesis lies in the context of constraint programming, a paradigm for modeling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large share of these problems finds a natural formulation in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent studies have therefore focused on finding efficient ways to represent these variables. It is thus customary to represent these domains through interval approximations (henceforth, representations), specified by a lower bound and an upper bound under an appropriate ordering relation. The recent evolution of research on constraint programming over sets has clearly shown that combining different representations achieves performance orders of magnitude better than traditional encoding techniques. Numerous proposals have been made in this direction. These works differ in how consistency among the different representations is maintained and in how constraints are propagated to reduce the search space. Unfortunately, no formal tool exists to compare these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of a combination of representations, bringing out the common aspects that have characterized previous work. In particular, we identify two possible kinds of combination, a strong one and a weak one, defining the notions of bounds consistency on constraints and of synchronization between representations.
Our study offers some interesting insights into existing combinations, highlighting their limits and revealing some surprises. We also provide a complexity analysis of the synchronization between minlex, a representation able to optimally propagate lexicographic constraints, and the main existing representations.
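As a rough illustration of interval representations and of synchronization between them, here is a minimal sketch with invented names: a set variable carries both subset bounds (the interval [lb, ub] under the subset ordering, i.e. all sets S with lb ⊆ S ⊆ ub) and cardinality bounds, and a `synchronize` step exchanges information to keep the two representations mutually consistent. This is only a toy model of the notion discussed in the thesis, not its formal definition:

```python
# Toy set variable combining two representations:
#   (1) subset bounds lb ⊆ S ⊆ ub   (2) cardinality bounds cmin ≤ |S| ≤ cmax
# synchronize() pushes deductions from each representation into the other.
class SetVar:
    def __init__(self, lb, ub, cmin=0, cmax=None):
        self.lb, self.ub = set(lb), set(ub)           # subset bounds
        self.cmin = cmin                               # cardinality bounds
        self.cmax = len(self.ub) if cmax is None else cmax

    def synchronize(self):
        # cardinality bounds can never be looser than the subset bounds allow
        self.cmin = max(self.cmin, len(self.lb))
        self.cmax = min(self.cmax, len(self.ub))
        if self.cmin > self.cmax:
            raise ValueError("empty domain")
        # conversely, tight cardinality bounds collapse the subset bounds
        if self.cmax == len(self.lb):
            self.ub = set(self.lb)   # no element outside lb may be added
        if self.cmin == len(self.ub):
            self.lb = set(self.ub)   # every element of ub is forced in

x = SetVar(lb={1}, ub={1, 2, 3}, cmax=1)
x.synchronize()
print(x.ub)  # → {1}: cardinality at most 1 forces the upper bound down to lb
```

The "strong" versus "weak" combinations of the thesis differ, informally, in how much of this information exchange is guaranteed after propagation; the sketch above only shows why such an exchange can prune at all.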
Abstract:
A new formulation containing cytokinins, commercialized as Cytokin, has been introduced as a dormancy-breaking agent (DBA). During a three-year study, Cytokin was applied at different concentrations and application times in two producing areas of the Emilia-Romagna region to verify its efficacy as a DBA. Cytokin application increased bud break and showed a lateral flower thinning effect. Moreover, treated vines showed an earlier and more uniform flowering compared to control ones. Results on productive performance revealed a consistent positive effect on fruit fresh weight at harvest. Moreover, Cytokin did not cause any phytotoxicity even at the highest concentrations. Starting from the field observations, which suggested the involvement of cytokinins in kiwifruit bud release from dormancy, 6-BA was applied in open-field conditions, and molecular and histological analyses were carried out on kiwifruit buds collected from the endodormant period up to complete bud break, to compare the naturally occurring situation with the one induced by exogenous cytokinin application. In detail, molecular analyses were set up to verify the expression of genes involved in the reactivation of the cell cycle (cyclin D3, histone H4, cyclin-dependent kinase B), as well as of others known to be up-regulated during bud release in other species, i.e. isopentenyltransferases (IPTs), which catalyze the first step in cytokinin biosynthesis, and sucrose synthase 1 and A, which are involved in sugar supply. Moreover, histological analyses of the cell division rate in kiwifruit bud apical meristems were performed. These analyses showed a reactivation of cell division during bud release and changes in the expression level of the investigated genes.
Abstract:
The secretion of drugs from intestinal cells back into the gut lumen, mediated by intestinal transporters such as P-glycoprotein (P-gp), is a well-known source of incomplete and variable bioavailability and of interactions with other drugs and food components. Nevertheless, no publications to date address the resulting consequences for the development of new peroral dosage forms. The aim of the present work was to make clear that intestinal secretion phenomena must be taken into account in the development of sustained-release drug products. To this end, effective permeabilities for the model drug talinolol were determined in different intestinal segments using a rat intestinal perfusion model. Furthermore, a sustained-release formulation of the model drug talinolol was developed. It was shown that the use of different buffers as dissolution media leads to the formation of different talinolol crystal structures. The newly developed sustained-release matrix tablets were evaluated using the pharmacokinetic software GastroPlus®. The interplay of slowed drug release from the dosage form and intestinal secretion led to a markedly reduced bioavailability of the model substance talinolol from the sustained-release formulation compared with immediate-release dosage forms. The influence of intestinal secretory transporters such as P-gp should therefore definitely be considered in the development of sustained-release dosage forms.
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed in the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which makes it easy to model any type of problem and solve it with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while still maintaining the absolute generality of the approach.
We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and which incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism. In particular we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts into CP-based frameworks.
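The local-branching idea underlying both frameworks can be sketched as follows. This is a deliberately simplified toy, with invented names: the neighborhood of an incumbent solution is the set of binary assignments within Hamming distance k, and a randomized sampler stands in for the CP tree search that would exhaustively explore that neighborhood:

```python
# Toy local branching over binary vectors: restrict search to solutions within
# Hamming distance k of the incumbent, look for an improvement, repeat.
# solve() is a hypothetical stand-in for a CP search under the distance constraint.
import random

def hamming(a, b):
    """Number of positions where two assignments differ."""
    return sum(x != y for x, y in zip(a, b))

def solve(cost, incumbent, k, n_vars, tries=200, seed=0):
    """Stand-in for CP search: sample assignments at Hamming distance k."""
    rng = random.Random(seed)
    best = incumbent
    for _ in range(tries):
        cand = list(incumbent)
        for i in rng.sample(range(n_vars), k):  # flip k distinct variables
            cand[i] = 1 - cand[i]
        if cost(cand) < cost(best):
            best = cand
    return best

def local_branching(cost, start, k=2, rounds=10):
    incumbent = start
    for _ in range(rounds):
        nxt = solve(cost, incumbent, k, len(start))
        if cost(nxt) >= cost(incumbent):
            break          # no improvement: stop (a real method would diversify)
        incumbent = nxt    # recentre the neighborhood on the new incumbent
    return incumbent

cost = lambda x: sum(x)    # toy objective: minimize the number of ones
print(local_branching(cost, [1, 1, 1, 1, 1, 1]))
```

SNS, as described above, would replace the single neighborhood explored per round with successive "slices" of a much larger neighborhood, each slice small enough for complete CP tree search.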
Abstract:
A numerical model for studying the influences of deep convective cloud systems on photochemistry was developed, based on a non-hydrostatic meteorological model and chemistry from a global chemistry transport model. The transport of trace gases, the scavenging of soluble trace gases, and the influences of lightning-produced nitrogen oxides (NOx = NO + NO2) on the local ozone-related photochemistry were investigated in a multi-day case study for an oceanic region located in the tropical western Pacific. Model runs considering the influences of large-scale flows, previously neglected in multi-day cloud-resolving and single-column model studies of tracer transport, showed that the influence of the mesoscale subsidence (between clouds) on trace gas transport had been considerably overestimated in those studies. The simulated vertical transport and scavenging of highly soluble tracers were found to depend on the initial profiles, reconciling contrasting results from two previous studies. Influences of the modeled uptake of trace gases by hydrometeors in the liquid and the ice phase were studied in some detail for a small number of atmospheric trace gases, and novel aspects concerning the role of the retention coefficient (i.e. the fraction of a dissolved trace gas that is retained in the ice phase upon freezing) in the vertical transport of highly soluble gases were illuminated. Including lightning NOx production inside a 500 km 2-D model domain was found to be important for the NOx budget and caused small to moderate changes in the domain-averaged ozone concentrations. A number of sensitivity studies showed that the fraction of lightning-associated NOx lost through photochemical reactions in the vicinity of the lightning source was considerable, but depended strongly on assumptions about the magnitude and the altitude of the lightning NOx source.
In contrast to a suggestion from an earlier study, it was argued that the near-zero upper-tropospheric ozone mixing ratios observed close to the study region were most probably not caused by the formation of NO associated with lightning. Instead, it was argued, in agreement with suggestions from other studies, that the deep convective transport of ozone-poor air masses from the relatively unpolluted marine boundary layer, which had most likely been advected horizontally over relatively large distances (both before and after encountering deep convection), probably played a role. In particular, it was suggested that the ozone profiles observed during CEPEX (Central Equatorial Pacific Experiment) were strongly influenced by the deep convection and the larger-scale flow associated with the intra-seasonal oscillation.
Abstract:
Mining and processing of metal ores are important causes of soil and groundwater contamination in many regions worldwide. Metal contaminations are a serious risk for the environment and human health, so assessing metal contamination in the soil is an important task. A common approach to assess the environmental risk that inorganic contaminations pose to soil and groundwater is the use of batch or column leaching tests. In this regard, the suitability of leaching tests is a controversial issue. In the first part of this work the applicability and comparability of common leaching tests within the scope of groundwater risk assessment of inorganic contamination is reviewed and critically discussed. Soil water sampling methods (the suction cup method and centrifugation) are addressed as an alternative to leaching tests. Reasons for the limited comparability of leaching test results are identified, and recommendations are given for the expedient application of leaching tests in groundwater risk assessment. Leaching tests are usually carried out in open contact with the atmosphere, disregarding possible changes of redox conditions. This can affect the original metal speciation and distribution, particularly when anoxic samples are investigated. The influence of sample storage on leaching test results of sulfide-bearing anoxic material from a former flotation dump is investigated in a long-term study. Since the oxidation of the sulfide-bearing samples leads to a significant overestimation of metal release, a feasible modification of common leaching tests for anoxic material is proposed, in which oxidation is prevented efficiently.
A comparison of leaching test results with soil water analyses showed that the modified saturation soil extraction (SSE) is the only one of the tested leaching procedures that can be recommended for the assessment of current soil water concentrations at anoxic sites when direct investigation of the soil water is impossible for technical reasons. The vertical distribution and speciation of Zn and Pb in the flotation residues, as well as metal concentrations in soil water and plants, were investigated to evaluate the environmental risk arising from this site due to the release of metals. The variations in pH and inorganic C content show an acidification of the topsoil, with pH values down to 5.5 in the soil and a soil water pH of 6 at 1 m depth. This is due to the oxidation of sulfides and the depletion of carbonates. In the anoxic subsoil, pH conditions are still neutral and soil water collected with suction cups is in equilibrium with carbonate minerals. Results from extended X-ray absorption fine-structure (EXAFS) spectroscopy confirm that Zn is mainly bound in sphalerite in the subsoil and that weathering reactions lead to a redistribution of Zn in the topsoil. A loss of 35% Zn and S from the topsoil, compared to the parent material with 10 g/kg Zn, has been observed. According to sequential chemical extractions (SCE), 13% of total Zn in the topsoil can be regarded as mobile or easily mobilizable. Zn concentrations of 10 mg/L were found in the soil water where the pH is acidic. Electron supply and the buffer capacity of the soil were identified as the main factors controlling Zn mobility and release to the groundwater. Variable Pb concentrations of up to 30 µg/L were observed in the soil water. In contrast to Zn, Pb is enriched in the mobile fraction of the oxidized topsoil by a factor of 2 compared to the subsoil with 2 g/kg Pb. 80% of the cation exchange capacity in the topsoil is occupied by Pb. Therefore, plant uptake and bioavailability are of major concern.
If further acidification of the site is not prevented in the future, a significant release of Zn, S, and Pb to the groundwater has to be expected. Results from this study show that the assessment of metal release, especially from sulfide-bearing anoxic material, requires an extensive comprehension of leaching mechanisms on the one hand and of the weathering processes that influence the speciation and mobility of metals on the other. Processes that may change redox and pH conditions in the future have to be addressed to enable sound decisions for soil and groundwater protection and remediation.
Abstract:
The interaction between atmosphere, land, ocean, and biosphere systems plays a prominent role in the atmospheric dynamics and in the convective rainfall distribution over the West African monsoon area during the boreal summer. In particular, the initiation of convective systems in the Sub-Sahelian region has been directly linked to soil moisture heterogeneities, identified as a major factor in the triggering, development, and propagation of convective systems. The present study investigates the large-scale convective dynamics of the African monsoon and the diurnal cycle of rainfall by exploring the hypothesis that the monsoon phenomenon emerges from the collective dynamics of many propagating convective systems. This hypothesis is based on the existence of an internal self-regulation mechanism among the various components. To this end, a multiple analysis was performed based on a remotely sensed rainfall dataset and on global and regional modelling data for a period of five seasons, 2004-2008. Satellite rainfall data and the variability of convective occurrence were studied to assess typical spatio-temporal signatures and characteristics, with an emphasis on the footprint of the diurnal cycle. Global and regional model simulation datasets, specifically developed for this analysis and based on the Regional Atmospheric Modelling System (RAMS), were analysed. Results from the numerical model datasets highlight a synchronization between the destabilization of the convective boundary layer and rainfall occurrence, driven by the solar radiation forcing through latent heat release. This supports the conclusion that the interacting systems studied undergo a process of mutual adjustment of rhythms.
Furthermore, this internal coherence of rainfall was studied in relation to the West African Heat Low pressure system, which plays a prominent role in the large-scale summer variability over the Mediterranean area, since it acts as a dynamic link between subtropical and midlatitude variability.
Effect of drug physicochemical properties on the release from liposomal systems in vitro and in vivo
Abstract:
Liposomes were discovered about 40 years ago by A. Bangham, and since then they have become very versatile tools in biology, biochemistry, and medicine. Liposomes are the smallest artificial vesicles of spherical shape that can be produced from natural nontoxic phospholipids and cholesterol. Liposome vesicles can be used as drug carriers and be loaded with a great variety of molecules, such as small drug molecules, proteins, nucleotides, and even plasmids. Due to the variability of liposomal compositions, they can be used in a large number of applications. In this thesis the β-adrenoceptor antagonists propranolol, metoprolol, atenolol and pindolol, glucose, 18F-fluorodeoxyglucose (FDG), and Er-DTPA were used for encapsulation in liposomes, characterization, and in vitro release studies. Multilamellar vesicles (MLV), large unilamellar vesicles (LUV) and small unilamellar vesicles (SUV) were prepared using one of the following lipids: 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), Phospholipone 90H (Ph90H), or a mixture of DSPC and DMPC (1:1). The freeze-thawing method was used for the preparation of liposomes because it has three advantages: (1) it avoids the use of chloroform, which is used in other methods and causes toxicity; (2) it is a simple method; and (3) it gives high entrapment efficiency. The entrapment efficiency (EE) differed depending on the type and phase transition temperature (Tc) of the lipid used. The average particle size and particle size distribution of the prepared liposomes were determined using both dynamic light scattering (DLS) and a laser diffraction analyzer (LDA). The average particle size of the prepared liposomes differs according to both liposomal type and lipid type. Dispersion and dialysis techniques were used to study the in vitro release of β-adrenoceptor antagonists. The in vitro release rate of β-adrenoceptor antagonists increased from MLV to LUV to SUV.
Regarding the lipid type, β-adrenoceptor antagonists exhibited different in vitro release patterns from one lipid to another. Two different concentrations (50 and 100 mg/ml) of Ph90H were used to study the effect of lipid concentration on the in vitro release of β-adrenoceptor antagonists. It was found that liposomes made from 50 mg/ml Ph90H exhibited higher release rates than liposomes made from 100 mg/ml Ph90H. Glucose was also encapsulated in MLV, LUV and SUV using DMPC, DSPC, Ph90H, soybean lipid (Syb), or a mixture of DSPC and DMPC (1:1). The average particle size and size distribution were determined using laser diffraction analysis. It was found that both EE and average particle size differ depending on both lipid and liposomal types. The in vitro release of glucose from the different types of liposomes was studied using a dispersion method and was found to depend on the lipid type. 18F-FDG was encapsulated in MLV using DMPC, DSPC, Ph90H, soybean lipid (Syb), or a mixture of DSPC and DMPC (1:1). FDG-containing LUV and SUV were prepared using Ph90H lipid. The in vitro release of FDG from the different types of lipids was studied using a dispersion method, and results similar to those for glucose release were obtained. In vivo imaging of both unencapsulated FDG and FDG-containing MLV was performed in the brain and the whole body of rats using a PET scanner. It was found that the release of FDG from FDG-containing MLV was sustained. The in vitro-in vivo correlation was studied using the in vitro release data of FDG from liposomes and the in vivo absorption data of FDG from injected liposomes obtained with microPET.
Erbium, a lanthanide metal, was used as a chelate with DTPA for encapsulation in SUV liposomes for the indirect radiation therapy of cancer. The liposomes were prepared using three different concentrations of soybean lipid (30, 50 and 70 mg/ml). The stability of the Er-DTPA SUV liposomes was assessed by storing the prepared liposomes at three different temperatures (4, 25 and 37 °C). It was found that the release of the Er-DTPA complex is temperature dependent: the higher the temperature, the higher the release. There was an inverse relationship between the release of the Er-DTPA complex and the concentration of lipid.
Abstract:
In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a great amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational application is becoming more frequent. The fact that many NWP centres have recently put convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing the quality information required to avoid radar errors degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma subscale; this scale can be modelled only with the highest-resolution NWP models, such as the COSMO-2 model. One of the problems in modelling extreme convective events concerns the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its kilometre-scale resolution every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work shows some preliminary experiments on coupling a high-resolution meteorological model with a hydrological one.
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDF graphs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on practical-size problems.
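The modular-arithmetic view of cyclic scheduling mentioned above can be illustrated with a small toy, with invented names and data: in a schedule of period `period`, activity i runs at `start[i] + k * period` for every iteration k, so both a precedence and a cumulative resource constraint can be checked modulo the period. These checkers are only illustrations of the modelling idea, not the filtering algorithms developed in the thesis:

```python
# Toy checks on a cyclic schedule: every activity repeats with the given
# period, so constraints are evaluated modulo the period.

def modular_precedence_ok(start, period, i, j, delay):
    """Each copy of activity j starts at least `delay` after the matching
    copy of activity i, wrapping around the cycle (assumes delay < period)."""
    return (start[j] - start[i]) % period >= delay

def cyclic_cumulative_ok(tasks, period, capacity):
    """tasks: list of (start, duration, demand); a task that runs past the
    end of the period wraps around, so usage is accumulated modulo period."""
    usage = [0] * period
    for s, d, r in tasks:
        for t in range(s, s + d):
            usage[t % period] += r
    return max(usage) <= capacity

tasks = [(0, 2, 1), (1, 2, 1), (3, 1, 2)]        # (start, duration, demand)
start = [s for s, _, _ in tasks]
print(modular_precedence_ok(start, 4, 0, 1, 1))   # True: task 1 starts 1 after task 0
print(cyclic_cumulative_ok(tasks, 4, 2))          # True: peak usage 2 <= capacity 2
```

In a generate-and-test approach the period 4 above would be fixed in advance and retried; in the thesis approach the period is itself a decision variable inferred during search, which is what the modular constraints make possible.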
Abstract:
Deep convection over forest fires is one of the most intense forms of atmospheric convection. The extreme cloud dynamics, with high vertical wind speeds (up to 20 m/s) already at cloud base, high water vapor supersaturations (up to 1%), and the high number concentrations of aerosol particles produced by the fire (up to 100,000 cm^-3), create a special setting for aerosol-cloud interactions. A decisive step in the microphysical development of a convective cloud is the activation of aerosol particles into cloud droplets. This activation process determines the initial number and size of the cloud droplets and can therefore influence the development of a convective cloud and its precipitation formation. The most important factors determining the initial number and size of the cloud droplets are the size and hygroscopicity of the aerosol particles available at cloud base as well as the vertical wind speed. To investigate the influence of these factors under pyro-convective conditions, numerical simulations were carried out using a cloud parcel model with a detailed spectral description of cloud microphysics. The results can be divided into three regimes depending on the ratio of vertical wind speed to aerosol number concentration (w/NCN): (1) an aerosol-limited regime (high w/NCN), (2) an updraft-limited regime (low w/NCN), and (3) a transitional regime (intermediate w/NCN). The results show that the variability of the initial cloud droplet number concentration in (pyro-)convective clouds is determined mainly by the variability of the vertical wind speed and of the aerosol concentration.
To investigate the microphysical processes within the smoky updraft region of a pyro-convective cloud with detailed spectral microphysics, the parcel model was initialized along a trajectory within the updraft region. This trajectory was calculated from three-dimensional simulations of a pyro-convective event with the model ATHAM. The cloud droplet number concentration increases with increasing aerosol concentration; on the other hand, the size of the cloud droplets decreases with increasing aerosol concentration. The reduced broadening of the droplet spectrum agrees with results from measurements and supports the concept of precipitation suppression in heavily polluted clouds. Using the model ATHAM, the dynamical and microphysical processes of pyro-convective clouds were investigated with two- and three-dimensional simulations, building on a realistic parameterization of aerosol particle activation derived from the results of the activation study. A state-of-the-art two-moment microphysical scheme was implemented in ATHAM to investigate the influence of the aerosol particle number concentration on the development of idealized pyro-convective clouds in US standard atmospheres for the mid-latitudes and the tropics. The results show that the aerosol number concentration influences the formation of rain. For low aerosol concentrations, rapid rain formation occurs mainly through warm microphysical processes. For higher aerosol concentrations, the ice phase becomes more important for rain formation, leading to a delayed onset of precipitation in more polluted atmospheres. It is also shown that the composition of the ice nucleating particles (IN) has a strong influence on the dynamical and microphysical structure of such clouds.
With very efficient IN, rain forms earlier. The investigation of the influence of the atmospheric background profile shows only a small effect of the meteorology on the sensitivity of pyro-convective clouds to the aerosol concentration. Finally, it is shown that the heat emitted by the fire has a distinct influence on the development and the cloud-top height of pyro-convective clouds. In summary, this dissertation investigates the microphysics of pyro-convective clouds in detail, using idealized simulations with a cloud parcel model with detailed spectral microphysics and a 3D model with a two-moment scheme. It is shown that the extreme conditions with respect to vertical wind speeds and aerosol concentrations have a distinct influence on the development of pyro-convective clouds.
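The reported trend, that droplet number rises with aerosol number while mean droplet size shrinks, can be sketched with a Twomey-type power law. The constants `c`, `k`, and the liquid water content below are illustrative placeholders, not values from the thesis:

```python
# Sketch of the qualitative aerosol effect: N_d = c * N_CN^k grows with aerosol
# number, and for a fixed liquid water content the volume-mean radius shrinks.

def droplet_number(n_cn, c=1.0, k=0.7):
    """Activated droplet number concentration (cm^-3), Twomey-type power law."""
    return c * n_cn ** k

def mean_radius_um(n_d, lwc_g_m3=0.5):
    """Volume-mean droplet radius (micrometres) for a fixed liquid water content."""
    rho_w = 1e6                                 # g m^-3, density of liquid water
    n_m3 = n_d * 1e6                            # cm^-3 -> m^-3
    vol_per_drop = lwc_g_m3 / (rho_w * n_m3)    # m^3 per droplet
    return (3.0 * vol_per_drop / (4.0 * 3.141592653589793)) ** (1.0 / 3.0) * 1e6

for n_cn in (100.0, 10000.0, 100000.0):
    n_d = droplet_number(n_cn)
    print(f"N_CN={n_cn:>8.0f} cm^-3  N_d={n_d:8.0f} cm^-3  r={mean_radius_um(n_d):5.1f} um")
```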
Resumo:
The term "banner cloud" denotes an impressive phenomenon of mountain meteorology. Banner clouds can occasionally be observed in high mountain ranges near steep summits or elongated ridges, such as the Matterhorn in the Swiss Alps or the Zugspitzgrat in the Bavarian Alps. The term refers to a banner- or flag-like cloud structure that appears to be attached to the leeward side of the mountain, while the windward side is completely cloud-free. Despite their relatively frequent occurrence, banner clouds have so far received little attention in the scientific literature. Correspondingly little is known about their formation mechanism and, in particular, about the relative importance of dynamical versus thermodynamical processes. Three different mechanisms have been postulated in the literature to explain the formation of banner clouds: (a) the Bernoulli effect, in particular local adiabatic cooling caused by a pressure drop along quasi-horizontal trajectories starting on the windward side; (b) isobaric mixing of colder near-surface air with warmer air from higher levels; or (c) forced lifting in the ascending branch of a lee rotor. The goal of this work is to develop a better physical understanding of the banner cloud phenomenon. The main focus lies on the dominant formation mechanism, the relative importance of dynamical and thermodynamical processes, and the question of suitable meteorological conditions. For this purpose, a new large-eddy simulation (LES) model was developed, suited to studying turbulent, moist flows in complex terrain. The model builds on an existing mesoscale (RANS) model. 
Within this work, the new model was extensively validated against numerical reference solutions and wind-tunnel data. The main results are discussed in order to verify and illustrate the applicability of the model to the scientific question at hand. The flow over an idealized pyramid-shaped peak was investigated for Froude numbers Fr >> 1, on both the laboratory and the atmospheric scale, with and without moist physics. The simulations show that banner clouds are a primarily dynamical phenomenon: they form in the lee of steep peaks through dynamically forced lifting. The simulations thus confirm the lee-rotor theory. Owing to the strongly asymmetric, obstacle-induced flow field, banner clouds can form even for horizontally homogeneous initial conditions with respect to moisture and temperature. This leads to the new insight that additional leeward moisture sources, different air masses on the windward and leeward sides, or radiative effects are not necessary prerequisites for the formation of a banner cloud. The probability of banner cloud formation increases with increasing height and steepness of the pyramid-shaped obstacle and is, to first approximation, independent of its orientation with respect to the oncoming flow. Simulations with and without moist physics show that thermodynamical processes (in particular the release of latent heat) are of secondary importance for the dynamics of prototypical (non-convective) banner clouds. The strengthening of the ascending branch in the lee and the resulting cloud formation caused by the release of latent heat are almost negligible. The moist physics does, however, induce a dipole-like structure in the vertical profile of the Brunt-Väisälä frequency, which leads to a moderate increase of leeward turbulence. 
It is shown that mountain waves are not an essential ingredient for understanding the dynamics of banner clouds. By strengthening the descent in the lee, mountain waves merely tend to reduce the horizontal extent of banner clouds. Regarding suitable meteorological conditions, the simulations show that, for horizontally homogeneous initial conditions, the equivalent potential temperature in the oncoming flow must decrease with height. Three necessary and sufficient criteria, based on dynamical and thermodynamical variables, are presented, which provide further insight into suitable meteorological conditions.
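The flow-regime parameter Fr >> 1 used above is the Froude number, conventionally Fr = U / (N h) with wind speed U, Brunt-Väisälä frequency N, and obstacle height h. A minimal sketch with illustrative values (not taken from the thesis):

```python
# Hedged sketch: compute the Brunt-Väisälä frequency from a potential-temperature
# gradient and the resulting Froude number for flow past an obstacle.
# Fr >> 1 means the stratification is too weak to block flow over the peak.

import math

def brunt_vaisala(theta, dtheta_dz, g=9.81):
    """Brunt-Väisälä frequency N (s^-1) from potential temperature theta (K)
    and its vertical gradient dtheta_dz (K m^-1)."""
    return math.sqrt(g / theta * dtheta_dz)

def froude(u, n_bv, h):
    """Froude number for flow of speed u (m/s) past an obstacle of height h (m)."""
    return u / (n_bv * h)

n_bv = brunt_vaisala(theta=290.0, dtheta_dz=0.003)   # weakly stable profile
print(round(froude(u=15.0, n_bv=n_bv, h=500.0), 2))  # supercritical (Fr > 1)
```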
Resumo:
In this work, the turbulence in the atmospheric boundary layer under convective conditions is modelled. To this end, the equations that describe the atmospheric motion are expressed through Reynolds averages and therefore require closures. This work consists in modifying the TKE-l closure used in the BOLAM (Bologna Limited Area Model) forecast model. In particular, the single-column model extracted from BOLAM is used and modified to obtain three further closure schemes: a non-local term is added to the flux-gradient relations used to close the second-order moments appearing in the evolution equation of the turbulent kinetic energy, so that the flux-gradient relations become more suitable for simulating an unstable boundary layer. Furthermore, the results obtained from the single-column model and from the three new schemes are compared with each other and with the observations of the well-known "GABLS2" case from the literature.
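The non-local modification of the flux-gradient relation can be sketched as a countergradient correction of the classical form w'theta' = -K_h (d(theta)/dz - gamma). This is not the BOLAM code; the eddy diffusivity and the countergradient term gamma below are placeholder values for illustration:

```python
# Illustrative countergradient closure: with the non-local term gamma the heat
# flux can be upward (positive) even where the local gradient alone would give
# a downward flux, as appropriate for a convective boundary layer.

import numpy as np

def heat_flux(theta, z, k_h, gamma=0.0):
    """Turbulent heat flux w'theta' (K m s^-1) on grid z from a flux-gradient
    relation with an optional non-local countergradient term gamma (K m^-1)."""
    dtheta_dz = np.gradient(theta, z)
    return -k_h * (dtheta_dz - gamma)

z = np.linspace(0.0, 1000.0, 11)        # height grid (m)
theta = 300.0 + 0.001 * z               # weakly stable mean profile (K)

local = heat_flux(theta, z, k_h=50.0)                   # purely local closure
nonloc = heat_flux(theta, z, k_h=50.0, gamma=0.004)     # countergradient closure

print(local[5], nonloc[5])   # local flux is downward, non-local flux is upward
```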