868 results for Elasticità Coordinazione Cloud Respect SYBL


Relevance:

20.00%

Publisher:

Abstract:

The goal of this thesis is to explore the world of Cloud Computing: to understand its main architectural characteristics and then to examine the fundamental components that turn an ordinary IT infrastructure into a cloud infrastructure, namely Cloud Operating Systems.


Development of an application to share contacts from SugarCRM with Google Contacts.


In this study the Aerodyne Aerosol Mass Spectrometer (AMS) was used during three laboratory measurement campaigns, FROST1, FROST2 and ACI-03. The FROST campaigns took place at the Leipzig Aerosol Cloud Interaction Simulator (LACIS) at the IfT in Leipzig, and the ACI-03 campaign was conducted at the AIDA facility at the Karlsruhe Institute of Technology (KIT). In all three campaigns, the effect of coatings on mineral dust ice nuclei (IN) was investigated. During the FROST campaigns, Arizona Test Dust (ATD) particles of 200, 300 and 400 nm diameter were given thin coatings (< 7 nm) of sulphuric acid. With such thin coatings, the AMS was operated close to its detection limits. Until now, it has not been possible to determine AMS detection limits accurately during regular measurements; therefore, the mathematical tools for analysing the detection limits of the AMS have been improved in this work. It is now possible to calculate detection limits of the AMS under operating conditions, without losing precious time by sampling through a particle filter. The instrument was characterised in more detail to enable correct quantification of the sulphate loadings on the ATD particle surfaces. Correction factors for the instrument inlet transmission, the collection efficiency and the relative ionisation efficiency were determined. With these corrections it was possible to quantify the sulphate mass per particle on the ATD after the condensation of sulphuric acid on its surface. The AMS results were combined with the ice nucleus counter results, revealing that the IN efficiency of ATD is reduced when it is coated with sulphuric acid. The reason for this reduction is a chemical reaction of sulphuric acid with the particle's surface. These reactions take place increasingly when the aerosol is humidified or heated after coating with sulphuric acid.
A detailed analysis of the solubility and the evaporation temperature of the surface reaction products revealed that aluminium sulphate is most likely produced in these reactions.
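The improved detection-limit analysis itself is only described qualitatively above. As an illustration of the common analytical-chemistry convention (an assumption here, not the thesis's actual method), a detection limit can be estimated as three standard deviations of repeated blank measurements:

```python
import statistics

def detection_limit(blank_signals, k=3.0):
    """Estimate a detection limit as k standard deviations of repeated
    blank (particle-free background) measurements; k = 3 is a common choice."""
    return k * statistics.stdev(blank_signals)
```

The point of the thesis's improvement is precisely to obtain such a limit without dedicating measurement time to blanks behind a particle filter; this sketch shows only the conventional baseline it replaces.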


The cloud makes it possible to share not only information but also data and programs, quickly and intuitively, greatly increasing everyone's capacity for collaboration, from personal users through to professionals. It is possible to build one's own business applications and have them communicate remotely with the cloud solutions on the market, through the tools made available by the providers. The applications themselves can be published on servers that are in turn cloud-based and managed entirely by external companies, which allows payment based on actual usage and freedom from the burden of managing a server. In this scenario, the thesis surveys the characteristics of some existing cloud services, in particular document-management software, and develops an application that communicates with them, exploiting their capabilities.


This PhD thesis discusses the impact of Cloud Computing infrastructures on Digital Forensics in its twofold role as a target of investigations and as a helping hand to investigators. The Cloud offers cheap and almost limitless computing power and storage space, which can be leveraged to commit new or old crimes and to host the related traces. Conversely, the Cloud can help forensic examiners find clues better and earlier than traditional analysis applications, thanks to its dramatically improved evidence-processing capabilities. In both cases, a new arsenal of software tools needs to be made available. The development of this novel weaponry, and its technical and legal implications from the point of view of the repeatability of technical assessments, is discussed throughout the following pages and constitutes the unprecedented contribution of this work.


The Large Magellanic Cloud (LMC) is widely considered the first step of the cosmological distance ladder, since it contains many different distance indicators. An accurate determination of the distance to the LMC allows one to calibrate these distance indicators, which are then used to measure the distances to far objects. The main goal of this thesis is to study the distance and structure of the LMC, as traced by different distance indicators. For these purposes three types of distance indicators were chosen: Classical Cepheids, "hot" eclipsing binaries and RR Lyrae stars. These objects belong to different stellar populations and trace, in turn, different sub-structures of the LMC. The RR Lyrae stars (age > 10 Gyr) are distributed smoothly and likely trace the halo of the LMC. Classical Cepheids are young objects (age 50-200 Myr), mainly located in the bar and spiral arm of the galaxy, while "hot" eclipsing binaries mainly trace the star-forming regions of the LMC. Furthermore, we chose these distance indicators because the calibration of their zero-points is based on fundamental geometric methods. The ESA cornerstone mission Gaia, launched on 19 December 2013, will measure trigonometric parallaxes for one billion stars with an accuracy of 20 micro-arcsec at V = 15 mag and 200 micro-arcsec at V = 20 mag, and will thus allow us to calibrate the zero-points of Classical Cepheids, eclipsing binaries and RR Lyrae stars with unprecedented precision.
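The geometric calibration rests on the elementary parallax-distance relation d(pc) = 1/p(arcsec) and the distance modulus. A minimal sketch (function names are illustrative, not from the thesis):

```python
import math

def parallax_to_distance_pc(parallax_mas):
    """Distance in parsec from a trigonometric parallax in milliarcseconds: d = 1000/p(mas)."""
    return 1000.0 / parallax_mas

def distance_modulus(distance_pc):
    """Distance modulus mu = m - M = 5 log10(d / 10 pc)."""
    return 5.0 * math.log10(distance_pc / 10.0)
```

A parallax of 0.02 mas corresponds to 50 kpc, i.e. a distance modulus near 18.5, close to the commonly adopted value for the LMC.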


Sulfate aerosol plays an important but uncertain role in cloud formation and in the radiative forcing of the climate, and is also important for acid deposition and human health. The oxidation of SO2 to sulfate is a key reaction in determining the impact of sulfate in the environment, through its effect on aerosol size distribution and composition. This thesis presents a laboratory investigation of sulfur isotope fractionation during SO2 oxidation by the most important gas-phase and heterogeneous pathways occurring in the atmosphere. The fractionation factors are then used to examine the role of sulfate formation in cloud processing of aerosol particles during the HCCT campaign in Thuringia, central Germany. The fractionation factor for the oxidation of SO2 by ·OH radicals was measured by reacting SO2 gas, with a known initial isotopic composition, with ·OH radicals generated from the photolysis of water at -25, 0, 19 and 40 °C (Chapter 2). The product sulfate and the residual SO2 were collected as BaSO4, and the sulfur isotopic compositions were measured with the Cameca NanoSIMS 50. The measured fractionation factor for 34S/32S during gas-phase oxidation is αOH = (1.0089 ± 0.0007) − ((4 ± 5) × 10−5)·T (°C). Fractionation during oxidation by the major aqueous pathways was measured by bubbling the SO2 gas through a solution of H2O2.
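The reported temperature dependence can be evaluated directly from the fitted central values (the quoted uncertainties, ±0.0007 and ±5 × 10⁻⁵, are omitted in this sketch):

```python
def alpha_oh(T_celsius):
    """Central-value fit of the measured 34S/32S fractionation factor for
    gas-phase SO2 + OH oxidation: alpha_OH = 1.0089 - 4e-5 * T (deg C).
    Error bars on both coefficients are ignored here."""
    return 1.0089 - 4e-5 * T_celsius
```

Over the measured range (-25 to 40 °C) the factor stays slightly above unity, i.e. the product sulfate is enriched in 34S relative to the residual SO2.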


One of the most discussed and interesting topics in computing today is certainly Cloud Computing. New organizations offering services of this kind are appearing everywhere, and many companies now want to learn to use them, migrating their data centres and applications to the Cloud. This is also driven by the ever stronger push from the big players in the IT community: Google, Amazon, Microsoft, Apple and many others speak about Cloud Computing with increasing frequency and are themselves undergoing deep restructuring in order to offer Cloud services, adapting to this great change under way in the IT sector. However, the great movement of energy, capital, investment and interest caused by the advent of Cloud Computing does not actually help one understand what it is, to the point that there is still no single, shared definition of it today. Moreover, the strong pressure it receives from the market means that many of its most distinctive characteristics, from a software-engineering point of view, are hidden and overshadowed by other properties that are architecturally less important but have a greater impact on the audience of potential customers. The aim of this thesis is therefore to bring some clarity to the world of Cloud computing, focusing in particular on the design patterns most widely used in developing cloud applications and presenting the main technologies used today, both in industry and in research, to build cloud applications, with particular attention to Microsoft Orleans.


The distributed nature of Cloud Computing, which entails a high degree of resource sharing and a multitude of accesses to computing systems, allows intruders to exploit this technology for malicious purposes. To counter intrusions and attacks on users' sensitive data, intrusion detection systems and defence methods are implemented in virtualized environments, with the aim of guaranteeing an overall security founded on both prevention and remediation: an effective security system must detect intrusions and imminent threats, providing a first, a-priori defensive phase, and at the same time avoid total failure even after sustaining damage, maintaining a high quality of service and thus providing a second, a-posteriori defensive phase. This thesis illustrates the many ways in which distributed attacks and malicious hacking operate, with particular reference to the latest generation of threats, and sets out the main strategies and techniques for guaranteeing security, protection and data integrity within a Cloud system.


Polar stratospheric clouds (PSCs), which occur in polar regions below a temperature of about -78 °C, exert a strong influence on the stratospheric ozone layer. This influence is largely exerted through heterogeneous chemical reactions that take place on the surfaces of the cloud particles. The chemical reactions occurring there are a prerequisite for the subsequent ozone depletion. Furthermore, the sedimentation of the cloud particles changes the chemical composition and the vertical distribution of trace gases in the stratosphere. For ozone chemistry, the removal of reactive nitrogen by the sedimentation of nitric-acid-containing cloud particles (denitrification) plays an important role. Through the same sedimentation process of PSC elements, water vapour is also removed from the stratosphere (dehydration). Both processes favour prolonged stratospheric ozone depletion in polar spring.

With particular regard to denitrification through the sedimentation of larger PSC particles, this work presents new results from in-situ measurements performed during the RECONCILE campaign in the winter of 2010 on board the high-altitude research aircraft M-55 Geophysica. In five flights, particle size distributions in the range between 0.5 and 35 µm were measured with cloud particle spectrometers based on light scattering. Since polar stratospheric clouds occur at altitudes between 17 and 30 km, in-situ measurements are comparatively rare, so several open questions remain. In particular, particles with optical diameters of up to 35 µm, detected during the new measurements, have to be reconciled with theoretical constraints.
The size of the particles is limited by the availability of the trace species involved (water vapour and nitric acid), the sedimentation velocity, the time available for growth, and the ambient temperature. These factors are discussed in the present work. From the measured particle volume, for example, the equivalent gas-phase concentration of HNO3 is calculated under the assumption of a NAT (nitric acid trihydrate) composition. The result exceeds the nitric acid concentration available in the stratosphere. Hypotheses are then discussed as to how the measured particle volume could have been overestimated, which would be possible, for example, if the particles were strongly aspherical. Furthermore, a particle mode below 2-3 µm in diameter was identified as STS (supercooled ternary solution droplets) on the basis of its temperature behaviour.

To calculate the cloud particle concentration from the measurements as accurately as possible, the sample volume, or effective sample area, of the instruments must be known. To measure this sample area, a droplet generator was built and used to calibrate three instruments; the calibration with the droplet generator focused on the Cloud Combination Probe (CCP). Besides the sample area and the particle sizing, further sources of measurement error are investigated with the help of measurements in tropospheric clouds and at a cloud simulation chamber. Among other things, the statistical analysis of the inter-arrival times of individual measurement events, which are recorded by newer probes, was used. The latter makes it possible to identify measurement artefacts such as noise, coincidence errors or shattering.
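The conversion from measured particle volume to an equivalent gas-phase HNO3 mixing ratio can be sketched as follows. The NAT density, the HNO3 mass fraction in HNO3·3H2O, and the default temperature and pressure are assumed typical polar-stratosphere values, not necessarily those used in the thesis:

```python
def equivalent_hno3_ppbv(volume_um3_per_cm3, T=195.0, p_hpa=50.0):
    """Equivalent gas-phase HNO3 mixing ratio (ppbv) bound in NAT particles,
    given a measured particle volume concentration (um^3 per cm^3 of air).
    Assumptions: NAT density 1.62 g/cm^3, HNO3 mass fraction 63.01/117.06
    in HNO3*3H2O, and illustrative stratospheric T (K) and pressure (hPa)."""
    RHO_NAT = 1.62                    # g cm^-3, assumed NAT density
    M_HNO3 = 63.01                    # g mol^-1
    NA = 6.02214076e23                # mol^-1
    KB = 1.380649e-23                 # J K^-1
    # particle volume -> NAT mass -> HNO3 mass, all per cm^3 of air (g)
    mass_hno3 = volume_um3_per_cm3 * 1e-12 * RHO_NAT * (M_HNO3 / 117.06)
    n_hno3 = mass_hno3 / M_HNO3 * NA              # molecules cm^-3
    n_air = (p_hpa * 100.0) / (KB * T) * 1e-6     # molecules cm^-3
    return n_hno3 / n_air * 1e9
```

With these assumptions, 1 µm³ cm⁻³ of NAT volume corresponds to a few ppbv of HNO3, i.e. a sizeable fraction of typical stratospheric nitric acid, which illustrates how an overestimated particle volume can imply more HNO3 than is actually available.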


In particle physics, carrying out data analysis requires a large amount of computing power and storage. The LHC Computing Grid is a globally distributed computing infrastructure and, at the same time, a set of services developed by a large community of physicists and computer scientists, spread across computing centres all over the world. This infrastructure proved its value in the analysis of the data collected during Run 1 of the LHC, playing a fundamental role in the discovery of the Higgs boson. Today Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by numerous scientific communities. Given the technical requirements of Run 2 (and beyond) of the LHC, the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions. The purpose of this thesis is to test a Cloud infrastructure and compare its performance with that of the LHC Computing Grid. Chapter 1 gives a general account of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments operating at it, with particular attention to the CMS experiment. Chapter 3 covers computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my comparative analysis of Grid and Cloud performance.


Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect overlaps between update and subscription extents efficiently. This thesis discusses the need for a framework and the reasons why one was implemented. Testing algorithms under fair conditions, libraries that ease the implementation of algorithms, and automation of the compilation phase were the fundamental motivations for building the framework. The driving observation was that, in the scientific literature on DDM and its various algorithms, each paper creates its own ad-hoc data for testing; one goal of this framework is therefore to allow algorithms to be compared on a consistent data set. It was decided to test the framework in the Cloud in order to obtain a more reliable comparison between runs by different users. Two of the most widely used services were considered, Amazon AWS EC2 and Google App Engine; the advantages and disadvantages of each are presented, together with the reasons for choosing Google App Engine. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort and Interval Tree Matching. Tests were carried out on execution time and peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were run on the sequential versions of the algorithms, so a further reduction in execution time may be possible for the Interval Tree Matching algorithm.
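Of the four algorithms, Brute Force is the simplest to state: compare every update extent against every subscription extent. A minimal one-dimensional sketch (the actual framework works with full HLA routing-space regions, and the names below are illustrative):

```python
def brute_force_matching(updates, subscriptions):
    """DDM brute-force matching: report every (update, subscription) index
    pair whose 1-D extents [lo, hi] overlap. O(n * m) interval comparisons."""
    matches = []
    for i, (u_lo, u_hi) in enumerate(updates):
        for j, (s_lo, s_hi) in enumerate(subscriptions):
            if u_lo <= s_hi and s_lo <= u_hi:  # closed intervals intersect
                matches.append((i, j))
    return matches
```

The quadratic pair count is exactly what Interval Tree Matching and Improved Sort avoid, which is why they come out as the most efficient in the measurements.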


The present work belongs to the PRANA project, the first extensive field campaign of observation of atmospheric emission spectra covering the Far InfraRed (FIR) spectral region for more than two years. The principal deployed instrument is REFIR-PAD, a Fourier transform spectrometer that we use to study Antarctic cloud properties. A dataset covering the whole of 2013 has been analyzed. First, good-quality spectra are selected, using radiance values in a few chosen spectral regions as thresholds. These spectra are then described compactly by averaging radiances over selected intervals, converting the averages into brightness temperatures (BTs), and finally taking the differences between each pair of them. A supervised feature-selection algorithm is implemented to select the features that are truly informative about the presence, phase and type of cloud. Training and test sets are then collected by means of Lidar quick-looks. The supervised classification of the overall monthly datasets is performed with an SVM. On the basis of this classification, and with the help of Lidar observations, 29 non-precipitating ice cloud case studies are selected. A single spectrum, or at most an average over two or three spectra, is processed with the retrieval algorithm RT-RET, exploiting some main IR window channels, in order to extract cloud properties. The retrieved effective radii and optical depths are analyzed in order to compare them with literature studies and to evaluate possible seasonal trends. Finally, the atmospheric profiles output by the retrieval are used as inputs for simulations, assuming two different crystal habits, with the aim of examining our ability to reproduce radiances in the FIR. Substantial mis-estimations are found for the FIR micro-windows: a high variability is observed in the spectral pattern of the deviations of the simulations from the measured spectra, and an effort has been made to link these deviations to cloud parameters.
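The interval-averaged radiance to brightness-temperature step can be sketched by inverting the Planck function. The wavenumber grid, interval choices and function names below are illustrative, not those of the thesis:

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI Planck/Boltzmann constants

def planck(nu, T):
    """Planck spectral radiance at wavenumber nu (m^-1) and temperature T (K)."""
    return 2.0 * H * C**2 * nu**3 / math.expm1(H * C * nu / (KB * T))

def brightness_temperature(nu, L):
    """Invert the Planck function: temperature of a blackbody emitting radiance L at nu."""
    return H * C * nu / (KB * math.log1p(2.0 * H * C**2 * nu**3 / L))

def interval_bt(spectrum, lo, hi):
    """Average the radiances of (nu, L) samples in the interval [lo, hi] and
    convert the average to a BT at the interval centre (illustrative feature)."""
    vals = [L for nu, L in spectrum if lo <= nu <= hi]
    return brightness_temperature(0.5 * (lo + hi), sum(vals) / len(vals))
```

Pairwise differences of such interval BTs then give the channel-difference features on which the feature selection and the SVM classification operate.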


This study aims at a comprehensive understanding of aerosol-cloud interactions and their effects on cloud properties and climate, using the chemistry-climate model EMAC. CCN activation is regarded as the dominant driver in aerosol-cloud feedback loops in warm clouds. The CCN activation is calculated prognostically using two different cloud droplet nucleation (CDN) parameterizations, the STN and HYB schemes. Both CDN schemes account for size and chemistry effects on droplet formation based on the same aerosol properties; the calculation of the solute effect (hygroscopicity) is the main difference between them. The kappa-method is incorporated for the first time into the Abdul-Razzak and Ghan (ARG) activation scheme to calculate the hygroscopicity and critical supersaturation of aerosols (HYB), and the performance of the modified scheme is compared with the osmotic coefficient model (STN), which is the standard in the ARG scheme. Reference simulations (REF) with a prescribed cloud droplet number concentration have also been carried out in order to isolate the effects of aerosol-cloud feedbacks. In addition, since the calculated cloud cover is an important determinant of cloud radiative effects and influences the nucleation process, two cloud cover parameterizations (a relative humidity threshold, RH-CLC, and a statistical cloud cover scheme, ST-CLC) have been examined together with the CDN schemes, and their effects on the simulated cloud properties and relevant climate parameters have been investigated. The distinct cloud droplet spectra show strong sensitivity to aerosol composition effects on cloud droplet formation at all particle sizes, especially for the Aitken mode.
Since Aitken particles are the major component of the total aerosol number concentration and of the CCN, and are the most sensitive to the effect of aerosol chemical composition (the solute effect) on droplet formation, their activation strongly contributes to total cloud droplet formation, thereby yielding different cloud droplet spectra. These different spectra influence cloud structure, cloud properties and climate, and show regionally varying sensitivity to meteorological and geographical conditions as well as to the spatiotemporal aerosol properties (particle size, number and composition). The changes in response to the different CDN schemes are more pronounced at lower altitudes than at higher altitudes. Among the regions, the subarctic regions show the strongest changes, as the lower surface temperature amplifies the effects of the activated aerosols; in contrast, the Sahara desert, an extremely dry area, is less influenced by changes in CCN number concentration. The aerosol-cloud coupling effects have been examined by comparing the prognostic CDN simulations (STN, HYB) with the reference simulation (REF). The most pronounced effects are found in the cloud droplet number concentration, the cloud water distribution and the cloud radiative effect. Aerosol-cloud coupling generally increases the cloud droplet number concentration; this decreases the efficiency of the formation of weak stratiform precipitation and increases the cloud water loading. These large-scale changes lead to larger cloud cover and longer cloud lifetimes, and contribute to high optical thickness and strong cloud cooling effects. This cools the Earth's surface, increases atmospheric stability and reduces convective activity. These changes corresponding to aerosol-cloud feedbacks are also simulated differently depending on the cloud cover scheme.
The ST-CLC scheme is more sensitive to aerosol-cloud coupling, since it uses a tighter linkage of local dynamics and cloud water distributions in the cloud formation process than the RH-CLC scheme. For the calculated total cloud cover, the RH-CLC scheme simulates a pattern more similar to the observations than the ST-CLC scheme does, but the overall properties (e.g., total cloud cover, cloud water content) in the RH simulations are overestimated, particularly over the ocean. This originates mainly from the difference in the simulated skewness in each scheme: the RH simulations calculate negatively skewed distributions of cloud cover and the associated cloud water, similar to the observations, while the ST simulations yield positively skewed distributions, resulting in lower mean values than in the RH-CLC scheme. The underestimation of total cloud cover over the ocean, particularly over the intertropical convergence zone (ITCZ), relates to a systematic deficiency in the prognostic calculation of skewness in the current set-up of the ST-CLC scheme.

Overall, the current EMAC model set-ups perform better over continents for all combinations of the cloud droplet nucleation and cloud cover schemes. For aerosol-cloud feedbacks, the HYB scheme is the better method for predicting cloud and climate parameters with either cloud cover scheme. The RH-CLC scheme offers a better simulation of total cloud cover and the relevant parameters with the HYB scheme and single-moment microphysics (REF) than the ST-CLC scheme does, but it is not very sensitive to aerosol-cloud interactions.
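The kappa-method referred to above links a particle's hygroscopicity to its critical supersaturation through kappa-Köhler theory. A minimal sketch using the standard approximation is given below; the constants and defaults are assumed textbook values, not the EMAC/ARG configuration:

```python
import math

def critical_supersaturation(kappa, dry_diameter_m, T=298.15,
                             sigma=0.072, Mw=0.018, rho_w=1000.0, R=8.314):
    """Approximate critical supersaturation (%) of a dry particle from
    kappa-Koehler theory: ln(Sc) ~ sqrt(4 A^3 / (27 kappa Dd^3)), with the
    Kelvin parameter A = 4 sigma Mw / (R T rho_w). SI units throughout;
    sigma is the surface tension of water, Mw its molar mass."""
    A = 4.0 * sigma * Mw / (R * T * rho_w)
    ln_Sc = math.sqrt(4.0 * A**3 / (27.0 * kappa * dry_diameter_m**3))
    return (math.exp(ln_Sc) - 1.0) * 100.0
```

For kappa = 0.6 (roughly ammonium sulfate) and a 100 nm dry diameter this gives a critical supersaturation of about 0.15%, illustrating how the chemistry (via kappa) and the size (via Dd) jointly set which particles activate to cloud droplets.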


The aim of this dissertation was to gain insight into the crystallization behaviour of soft matter, such as various polymers or water, under spatial confinement. The goal was to investigate how, why and when crystallization sets in within nanoporous structures. Moreover, the crystallization of soft matter in nanoporous structures is of great interest not only for fundamental research; numerous practical applications follow from it as well. By deliberately controlling the crystallinity of polymers, materials with different mechanical and optical properties could be obtained. Spatially confined water was investigated as well. Confined water plays an important role in molecular biology, e.g. for globular proteins, and as cloud condensation nuclei in atmospheric chemistry and physics. It is also found in interstellar space in the form of ice particles. Understanding and influencing the crystallization of confined water is, finally, also of great interest for the durability of building materials such as cement.

To investigate this, water is usually strongly supercooled and its crystallization behaviour studied as a function of volume. It has been observed that micrometre- and nanometre-sized volumes only crystallize below -38 °C and -70 °C, respectively; in such cases water generally undergoes homogeneous nucleation. Usually, however, water freezes at higher temperatures, because impurities induce premature, heterogeneous nucleation.

The present work investigates the relevant phase diagrams of crystallizable polymers and of water under spatially confined conditions. Self-ordered anodic aluminium oxide (AAO) with pore sizes in the range of 25 to 400 nm was chosen as the confining medium for both the polymers and water. The AAO nanopores are cylindrical and aligned in parallel, with uniform pore length and diameter. They are therefore well suited as a model system for studying crystallization processes under well-defined spatial confinement.

Several semicrystalline polymers were used, including poly(ethylene oxide), poly(ε-caprolactone) and PEO-b-PCL diblock copolymers. The influence of pore size on nucleation was investigated from several points of view: (i) the effect on the nucleation mechanism (heterogeneous versus homogeneous nucleation), (ii) crystal orientation and degree of crystallinity, and (iii) the relationship between the crystallization temperature under homogeneous crystallization and the glass transition temperature.

It was shown that the crystallization of bulk polymers is induced by heterogeneous nucleation, whereas crystallization in small pores proceeds mainly via homogeneous nucleation, with reduced and tunable crystallinity and a high degree of crystal orientation. The AAOs also allowed the critical nucleus size for polymer crystallization to be estimated. Finally, the influence of polydispersity, of oligomers and of other additives on the nucleation mechanism was investigated.

Ice nucleation was studied in the same AAOs, and a direct relationship between the type of nucleation (heterogeneous or homogeneous) and the ice phase formed was observed. In larger pores nucleation proceeded heterogeneously, whereas in smaller pores it was homogeneous. A phase transformation of the ice was also observed: in the larger pores hexagonal ice was detected, while below a pore size of 35 nm mainly cubic ice occurred. Notably, the cubic ice was not a metastable but a stable phase.

Finally, a phase diagram for spatially confined water is proposed. This phase diagram may be relevant for technical applications, e.g. for building materials such as cement. As a further example, AAOs that suppress heterogeneous nucleation (pore diameter ≤ 35 nm) could serve as filters for ultrapure water.

Now to the question posed at the outset: how different are water and polymer crystallization under spatial confinement? Comparing the two phase diagrams, we conclude that they are not fundamentally different. This is surprising at first, since water is a small molecule, much smaller than the smallest pore size. Water, however, forms strong hydrogen bonds and therefore behaves like a polymer; hence the name "polywater".